As we know from arguments due to the Scottish philosopher David Hume (1978), the Austrian philosopher Ludwig Wittgenstein (1953), the American philosopher Nelson Goodman (1972) and the American logician and philosopher Saul Aaron Kripke (1982), in all cases of empirical abduction, and of training in the use of a word, the data underdetermine the theories. This moral is emphasized by the American philosopher Willard Van Orman Quine (1954, 1960) as the principle of the underdetermination of theory by data. But we nonetheless do abduce adequate theories in science, and we do learn the meanings of words. And it would be bizarre to suggest that all correct scientific theories or the facts of lexical semantics are innate.
But, innatists reply, when the empiricist relies on the underdetermination of theory by data as a counter-example, a significant disanalogy with language acquisition is ignored: the abduction of scientific theories is a difficult, laborious process, taking a sophisticated theorist a great deal of time and deliberate effort. First language acquisition, by contrast, is accomplished effortlessly and very quickly by a small child. The enormous relative ease with which such a complex and abstract domain is mastered by such a naïve ‘theorist’ is evidence for the innateness of the knowledge achieved.
Empiricists such as the American philosopher Hilary Putnam (1926-) have rejoined that innatists underestimate the amount of time that language learning actually takes, focusing only on the number of years from the apparent onset of acquisition to the achievement of relative mastery over the grammar. Instead of noting how short this interval is, they argue, one should count the total number of hours spent listening to language and speaking during this time. That number is in fact quite large, and is comparable to the number of hours of study and practice required for the acquisition of skills that are not argued to derive from innate structures, such as chess playing or musical composition. Hence, when these hours are taken into consideration, language learning looks more like one more case of human skill acquisition than like a special unfolding of innate knowledge.
Innatists, however, note that while the ease with which most such skills are acquired depends on general intelligence, language is learned with roughly equal speed, and to roughly the same level of competence, by children at every level of general intelligence. In fact, even significantly retarded individuals, assuming no specific language deficit, acquire their native language on a time-scale and to a degree comparable to that of normally intelligent children. The language acquisition faculty, hence, appears to allow access to a sophisticated body of knowledge independent of the sophistication of the general knowledge of the language learner.
Empiricists reply that this argument ignores the centrality of language in a wide range of human activities, and consequently the enormous attention paid to language acquisition by retarded youngsters and their parents or caretakers. They argue as well that innatists overstate the parity in linguistic competence between retarded children and children of normal intelligence.
Innatists point out that the ‘modularity’ of language processing is a powerful argument for the innateness of the language faculty. There is a large body of evidence, innatists argue, for the claim that the processes that subserve the acquisition, understanding and production of language are quite distinct from, and independent of, those that subserve general cognition and learning. That is to say, language learning and language processing mechanisms, and the knowledge they embody, are domain-specific: grammar, and the mechanisms of grammatical learning and utilization, are not used outside of language processing. They are informationally encapsulated: only linguistic information is relevant to language acquisition and processing. They are mandatory: language learning and language processing are automatic. Moreover, language is subserved by specific, dedicated neural structures, damage to which predictably and systematically impairs linguistic functioning. All of this suggests a specific ‘mental organ’, to use Chomsky’s phrase, that has evolved in the human cognitive system specifically in order to make language possible. This organ’s specific structure simultaneously constrains the range of possible human languages, guides the learning of a child’s target language, and later makes rapid on-line language processing possible. The principles represented in this organ constitute the innate linguistic knowledge of the human being. Additional evidence for the early operation of such an innate language acquisition module is derived from the many infant studies showing that infants selectively attend to sound streams that are prosodically appropriate, that have pauses at clausal boundaries, and that contain linguistically permissible phonological sequences.
It is fair to ask where we get the powerful inner code whose representational elements need only systematic construction to express, for example, the thought that cyclotrons are bigger than black holes. But on this matter the language of thought theorist has little to say. All that ‘concept’ learning could be (assuming it is some kind of rational process and not due to mere physical maturation or a bump on the head), according to the language of thought theorist, is the trying out of combinations of existing representational elements to see if a given combination captures the sense (as evinced in its use) of some new concept. The consequence is that concept learning, conceived as the expansion of our representational resources, simply does not happen. What happens instead is that we work with a fixed, innate repertoire of elements whose combination and construction must express any content we can ever learn to understand.
Representationalism is, broadly, the doctrine that the mind (or sometimes the brain) works on representations of the things and features of things that we perceive or think about. In the philosophy of perception the view is especially associated with the French Cartesian philosopher Nicolas Malebranche (1638-1715) and the English philosopher John Locke (1632-1704), who, holding that the mind is a container for ideas, held that of our real ideas some are adequate and some inadequate, the adequate ones being those that perfectly represent the archetypes from which the mind supposes them taken, which it intends them to stand for, and to which it refers them. The problems in this account were mercilessly exposed by the French theologian and philosopher Antoine Arnauld (1612-94) and the French critic of Cartesianism Simon Foucher (1644-96), writing against Malebranche, and by the idealist George Berkeley, writing against Locke. The fundamental problem is that the mind ‘supposes’ its ideas to represent something else, but it has no access to this something else except by forming another idea. The difficulty is to understand how the mind ever escapes from the world of representations, or how those representations acquire genuine content pointing beyond themselves. In more recent philosophy, the analogy between the mind and a computer has suggested that the mind or brain manipulates signs and symbols, thought of as like the instructions in a machine’s program, which stand for aspects of the world. The point is sometimes put by saying that the mind, on this theory, becomes a syntactic engine rather than a semantic engine. Representation is also attacked, at least as a central concept in understanding the mind, by ‘pragmatists’, who emphasize instead the activities surrounding the use of language, rather than what they see as a mysterious link between mind and world.
Representations, along with mental states, especially beliefs and thoughts, are said to exhibit ‘intentionality’ in that they refer to, or stand for, something other than themselves. The nature of this special property, however, has seemed puzzling. Not only is intentionality often assumed to be limited to humans, and possibly a few other species, but the property itself appears to resist characterization in physicalist terms. The problem is most obvious in the case of ‘arbitrary’ signs, like words, where it is clear that there is no connection between the physical properties of a word and what it denotes; yet the problem remains even for Iconic representation.
Early attempts tried to establish the link between sign and object via the mental states of the sign’s user. A symbol # stands for ✺ for S if it triggers a ✺-idea in S. On one account, the reference of # is the ✺-idea itself. On the other major account, the denotation of # is whatever the ✺-idea denotes. The first account is problematic in that it fails to explain the link between symbols and the world. The second is problematic in that it just shifts the puzzle inward. For example, if the word ‘table’ triggers the image ‘‒’ or ‘TABLE’, what gives this mental picture or word any reference at all, let alone the denotation normally associated with the word ‘table’?
An alternative to these mentalistic theories has been to adopt a behaviouristic analysis. On this account, that # denotes ✺ for S is explained along the lines of either (1) S is disposed to behave toward # as toward ✺, or (2) S is disposed to behave in ways appropriate to ✺ when presented with #. Both versions prove faulty in that the very notions of the behaviour associated with, or appropriate to, ✺ are obscure. In addition, there seem to be no reasonable correlations between behaviour toward signs and behaviour toward their objects that are capable of accounting for the referential relation.
A currently influential attempt to ‘naturalize’ the representation relation takes its cue from indices. The crucial link between sign and object is established by some causal connection between ✺ and #, although it is allowed that such a causal relation is not sufficient for full-blown intentional representation. An increase in temperature causes the mercury to rise in the thermometer, but the mercury level is not a representation for the thermometer. In order for # to represent ✺ for S, the causal connection must play an appropriate role in the functional economy of S’s activities. The notion of ‘function’, in turn, is to be spelled out along biological or other lines so as to remain within ‘naturalistic’ constraints. This approach runs into problems in specifying a suitable notion of ‘function’ and in accounting for the possibility of misrepresentation. Also, it is not obvious how to extend the analysis to encompass the semantical force of more abstract or theoretical symbols. These difficulties are further compounded when one takes into account the social factors that seem to play a role in determining the denotative properties of our symbols.
The problems faced in providing a reductive naturalistic analysis of representation have led many to doubt that this task is achievable or necessary. Although a story can be told about how some words or signs were learned via association or other causal connections with their referents, there is no reason to believe that the ‘stands-for’ relation, or semantic notions in general, can be reduced to or eliminated in favour of non-semantic terms.
Although linguistic and pictorial representations are undoubtedly the most prominent symbolic forms we employ, the range of representational systems humans understand and regularly use is surprisingly large. Sculptures, maps, diagrams, graphs, gestures, musical notation, traffic signs, gauges, scale models, and tailors’ swatches are but a few of the representational systems that play a role in communication, thought, and the guidance of behaviour. Indeed, the importance and prevalence of our symbolic activities has been taken as a hallmark of the human.
What is it that distinguishes items that serve as representations from other objects or events? And what distinguishes the various kinds of symbols from each other? As for the first question, there has been general agreement that the basic notion of a representation involves one thing’s ‘standing for’, ‘being about’, ‘referring to’ or ‘denoting’ something else. The major debates have been over the nature of this connection between a representation and that which it represents. As for the second question, perhaps the most famous and extensive attempt to organize and differentiate among alternative forms of representation is found in the works of the American philosopher of science Charles Sanders Peirce (1839-1914), who graduated from Harvard in 1859 and, apart from lecturing at Johns Hopkins University from 1879 to 1884, held almost no teaching post. Peirce’s theory of signs is complex, involving a number of concepts and distinctions that are no longer paid much heed. The aspect of his theory that remains influential, and is widely cited, is his division of signs into Icons, Indices and Symbols. Icons are signs that are said to be like, or resemble, the things they represent, e.g., portrait paintings. Indices are signs that are connected to their objects by some causal dependency, e.g., smoke as a sign of fire. Symbols are signs related to their objects by virtue of use or association: they are arbitrary labels, e.g., the word ‘table’. This tripartite division among signs, or variants of it, is routinely put forth to explain differences in the way representational systems are thought to establish their links to the world. Further, placing a representation in one of the three divisions has been used to account for the supposed differences between conventional and non-conventional representations, between representations that do and do not require learning to understand, and between representations, like language, that need to be read, and those which do not require interpretation. Some theorists, moreover, have maintained that it is only the use of Symbols that exhibits or indicates the presence of mind and mental states.
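Since the trichotomy is at bottom a classification scheme, it can be made concrete with a small sketch. The following Python fragment is an illustration only: the enum, the grounding glosses and the example table are expository assumptions, not part of Peirce’s own apparatus.

```python
# Illustrative encoding of Peirce's trichotomy of signs.
# The glosses follow the text above; the examples are the text's own.
from enum import Enum

class SignKind(Enum):
    ICON = "resembles its object"                # e.g. a portrait
    INDEX = "causally connected to its object"   # e.g. smoke and fire
    SYMBOL = "linked by use or association"      # an arbitrary label

examples = {
    "portrait painting": SignKind.ICON,
    "smoke": SignKind.INDEX,
    "the word 'table'": SignKind.SYMBOL,
}

for sign, kind in examples.items():
    # Placement in a division records how the sign links to the world.
    print(f"{sign}: {kind.name.lower()} ({kind.value})")
```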
Over the years, this tripartite division of signs, although often challenged, has retained its influence. More recently, an alternative approach to representational systems (or, as he calls them, ‘symbol systems’) has been put forth by the American philosopher Nelson Goodman (1906-98). (Goodman is also known for his treatment of induction: the classical problem of ‘induction’ is often phrased in terms of finding some reason to expect that nature is uniform, and in Fact, Fiction, and Forecast (1954) Goodman showed that we need in addition some reason for preferring some uniformities to others, for without such a selection the uniformity of nature is vacuous.) Goodman (1976) has proposed a set of syntactic and semantic features for categorizing representational systems. His theory provides for a finer discrimination among types of systems than Peirce’s categories allow. What also emerges clearly is that many rich and useful systems of representation lack a number of features taken to be essential to linguistic or sentential forms of representation, e.g., discrete alphabets and vocabularies, syntax, logical structure, inference rules, compositional semantics and recursive compounding devices.
As a consequence, although these representations can be appraised for accuracy or correctness, it does not seem possible to analyse such evaluative notions along the lines of standard truth theories, geared as they are to the structure found in sentential systems.
In light of this newer work, serious questions have been raised about the soundness of the tripartite division, and about whether various of the psychological and philosophical claims concerning conventionality, learning, interpretation, and so forth, that have been based on this traditional analysis can be sustained. It is of special significance that Goodman has joined a number of theorists in rejecting accounts of Iconic representation in terms of resemblance. The rejection has been twofold. First, as Peirce himself recognized, resemblance is not sufficient to establish the appropriate referential relations: the numerous prints of a lithograph do not represent one another, any more than an identical twin represents his or her sibling. Something more than resemblance is needed to establish the connection between an Icon or picture and what it represents. Second, since Iconic representations lack as many properties as they share with their referents, and since certain non-Iconic symbols can be placed in correspondence with their referents, it is difficult to provide a non-circular account of the similarity that distinguishes Icons from other forms of representation. What is more, even if these two difficulties could be resolved, it would not show that the representational function of pictures can be understood independently of an associated system of interpretation. The design, □, may be a picture of a mountain, or a word in a foreign language, or it may have no representational significance at all. Whether it is a representation, and what kind of representation it is, is relative to a system of interpretation.
If so, then what is the explanatory role of providing reasons for our psychological states and intentional acts? Clearly part of this role comes from the justificatory nature of the reason-giving relation: ‘Things are made intelligible by being revealed to be, or to approximate to being, as they rationally ought to be’. For some writers the justificatory and explanatory tasks of reason-giving simply coincide. The manifestation of rationality is seen as sufficient to explain states or acts quite independently of questions regarding causal origin. Within this model, the greater the degree of rationality we can detect, the more intelligible the sequence will be. Where there is a breakdown in rationality, as in cases of weakness of will or self-deception, there is a corresponding breakdown in our ability to make the action or belief intelligible.
The equation of the justificatory and explanatory roles of rationality can be found within two quite distinct pictures. One account views the attribution of rationality from a third-person perspective. Attributing intentional states to others, and by analogy to ourselves, is a matter of applying to them a certain pattern of interpretation: we ascribe whatever states enable us to make sense of their behaviour as conforming to a rational pattern. Such a mode of interpretation is commonly an ex post facto affair, although it can also aid prediction. Our interpretations are never definitive or closed; they are always open to revision and modification in the light of future behaviour, with the aim of making people appear, on the whole, as rational as possible. Where we fail to detect a rational pattern, we give up the project of seeing the system as rational and instead seek explanations of a mechanistic kind.
The other picture is resolutely first-personal, linked to the claimed prospectivity of rationalizing explanations. We make an action, for example, intelligible by adopting the agent’s perspective on it. Understanding is a reconstruction of actual or possible decision-making. It is from such a first-personal perspective that goals are detected as desirable and courses of action as appropriate to the situation. The standpoint of an agent deciding how to act is not that of an observer predicting the next move. When I find something desirable and judge an act an appropriate means of achieving it, I conclude that a certain course of action should be taken. This is different from my reflecting on my past behaviour and concluding that I will do ‘X’ in the future.
For many writers, nonetheless, the justificatory and explanatory roles of reason cannot simply be equated. To do so fails to distinguish cases in which I merely have reasons for a belief or action from cases in which I believe or act because of those reasons. I may have beliefs from which your innocence could be deduced, but nonetheless come to believe you are innocent because you have blue eyes. I may have intentional states that provide altruistic reasons for contributing to charity, but nonetheless contribute out of a desire to earn someone’s good opinion. In both these cases, even though my belief could be shown to be rational in the light of my other beliefs, and my action desirable in the light of my goals, none of these rationalizing links would form part of a valid explanation of the phenomena concerned. Moreover, there are cases of irrationality, as when I continue to smoke although I judge it would be better to abstain. This suggests that the mere availability of a reason cannot of itself be sufficient to explain why an action occurred.
If we resist the equation of the justificatory and explanatory work of reason-giving, we must look for a connection between reasons and actions or beliefs which is present in cases where those reasons genuinely explain and absent in cases of mere rationalization. The classical suggestion in this context is causality: in cases of genuine explanation, the reason-providing intentional states cause the beliefs or actions for which they also provide reasons. This position seems to find support from considering the conditionals and counterfactuals that our reason-providing explanations support, which parallel those supported by other causal explanations. Imagine that I am approaching the Sky Dome’s executive suites looking for the cafeteria. If I believe the café is to the left, I turn accordingly. If my walking in that direction is explained simply by my desire to find the café, then in the absence of such a desire I would not have walked in the direction that led toward the executive suites. In general terms, where my reasons explain my action, those reasons were, in those circumstances, necessary for the action and, at least, made its occurrence probable. These conditional links can be explained if we accept that the reason-giving link is also a causal one. Any alternative account would need to accommodate them as well.
A challenge to the defence of the view that reasons are causes is that the requirement it imposes seems arbitrary: why does explanation require citing the cause of the cause of a phenomenon, but not the next link back in the chain of causes? Perhaps what is not generally true of explanation is true only of mentalistic explanation: only in giving the latter type are we obliged to give the cause of a cause. However, this too seems arbitrary. What is the difference between mentalistic and non-mentalistic explanation that would justify imposing more stringent restrictions on the former? The same argument applies to non-cognitive mental states, such as sensations or emotions. Opponents of behaviourism sometimes reply that mental states can be observed: each of us, through ‘introspection’, can observe at least some mental states, namely our own - at least those of which we are conscious.
To this point, the distinction between reasons and causes is motivated in good part by a desire to separate the rational from the natural order. Historically, it probably traces back to Aristotle’s similar (but not identical) distinction between final and efficient causes, the efficient cause being that (as a person, fact, or condition) which is responsible for an effect. Recently, the contrast has been drawn primarily in the domain of actions and, secondarily, elsewhere.
Many who have insisted on distinguishing reasons from causes have failed to distinguish two kinds of reason. Consider my reason for sending a letter by express mail. Asked why I did so, I might say I wanted to get it there in a day or, simply, to get it there in a day. Strictly, the reason is expressed by ‘to get it there in a day’. But this expresses my reason only because I am suitably motivated: I am in a reason state, wanting to get the letter there in a day. It is reason states - especially wants, beliefs and intentions - and not reasons strictly so called, that are candidates for causes. The latter are abstract contents of propositional attitudes; the former are psychological elements that play motivational roles.
If reason states can motivate, however, why (apart from confusing them with reasons proper) deny that they are causes? For one thing, it may be said that they are not events, at least in the usual sense entailing change; they are dispositional states (this contrasts them with occurrences, but does not imply that they admit of dispositional analysis). It has also seemed to those who deny that reasons are causes that the former justify, as well as explain, the actions for which they are reasons, whereas the role of causes is at most to explain. Another claim is that the relation between reasons (and here reason states are often cited explicitly) and the actions they explain is non-contingent, whereas the relation of causes to their effects is contingent. The ‘logical connection argument’ proceeds from this claim to the conclusion that reasons are not causes.
These arguments are inconclusive. First, even if causes are events, sustaining causation may explain, as where the continued standing of a broken table is explained by the support of stacked boards replacing its missing legs. Second, the ‘because’ in ‘I sent it by express because I wanted to get it there in a day’ is in some sense causal; construed non-causally, the explanation would at best rationalize, rather than explain, the action. And third, if there is a non-contingent connection between, say, my wanting something and the action it explains, there are close causal analogues, such as the connection between bringing a magnet to iron filings and their gravitating to it: this is, after all, a ‘definitional’ connection, expressing part of what it is to be magnetic, yet the magnet causes the filings to move.
There is, then, a clear distinction between reasons proper and causes, and even between reason states and event causes; but the distinction cannot be used to show that the relation between reasons and the actions they justify is in no way causal. Precisely parallel points hold in the epistemic domain (and, indeed, for anything that similarly admits of justification and explanation by reasons). Suppose my reason for believing that you received my letter today is that I sent it by express yesterday. My reason, strictly speaking, is that I sent it by express yesterday; my reason state is my believing this. Arguably, my reason justifies the further proposition for which it is my reason, while my reason state - my evidential belief - both explains and justifies my belief that you received the letter today. I can say that what justifies that belief is the fact that I sent the letter by express yesterday; but this statement expresses my believing that evidence proposition, for if I do not believe it then my belief that you received the letter is not justified. It is not justified by the mere truth of the proposition (and it can be justified even if that proposition is false).
Similarly, there are, for belief as for action, at least five main kinds of reason: (1) normative reasons, reasons (objective grounds) there are to believe, say, that there is a greenhouse effect; (2) person-relative normative reasons, reasons for, say, me to believe; (3) subjective reasons, reasons I have to believe; (4) explanatory reasons, reasons why I believe; and (5) motivating reasons, reasons for which I believe. Reasons of kinds (1) and (2) are propositions and thus not serious candidates to be causal factors. The states corresponding to (3) may or may not be causal elements. Reasons why, kind (4), are always (sustaining) explainers, though not necessarily even prima facie justifiers, since a belief can be causally sustained by factors with no evidential value. Motivating reasons are both explanatory and possess whatever minimal prima facie justificatory power (if any) a reason must have to be a basis of belief.
Current discussion of the reasons-causes issue has shifted from the question whether reason states can causally explain to the perhaps deeper questions whether they can justify without so explaining, and what kind of causal relation to the actions and beliefs they explain they must bear. ‘Reliabilists’ tend to take a belief to be justified by a reason only if it is held at least in part for that reason, in a sense implying, but not entailed by, its being causally based on that reason. ‘Internalists’ often deny this, perhaps thinking we lack internal access to the relevant causal connections. But internalists need internal access only to what justifies - say, the reason state - and not to the (perhaps quite complex) relations it bears to the belief it justifies, in virtue of which it does so. Many questions also remain concerning the very nature of causation, reason-hood, explanation and justification.
Nevertheless, for most causal theorists, the radical separation of the causal and rationalizing roles of reason-giving explanations is unsatisfactory. For such theorists, where we can legitimately point to an agent’s reasons to explain a certain belief or action, those features of the agent’s intentional states that render the belief or action reasonable must be causally relevant in explaining how the agent came to believe or act in the way they rationalize. One way of putting this requirement is that reason-giving states not only cause but also causally explain their explananda.
The terms ‘explanans’ and ‘explanandum’ have a wide currency in philosophical discourse because they allow a succinctness unobtainable in ordinary English. Whether in science, philosophy or everyday life, one often offers explanations. The particular statements, laws, theories or facts that are used to explain something are collectively called the ‘explanans’, and the target of the explanans - the thing to be explained - is called the ‘explanandum’. Thus, one might explain why ice forms on the surface of lakes (the explanandum) in terms of the special property of water of expanding as it approaches freezing point, together with the fact that materials less dense than liquid water float in it (the explanans). The terms come from two different Latin grammatical forms: ‘explanans’ is the present participle of the verb meaning ‘to explain’, and ‘explanandum’ is a gerundive, a direct-object noun form, derived from the same verb.
In contrast to what merely happens to us, or to parts of us, actions are what we do. My moving my finger is an action, to be distinguished from the mere motion of that finger. My snoring, likewise, is not something I ‘do’ in the intended sense, though in another, broader sense it is something I often ‘do’ while asleep.
The contrast has both metaphysical and moral import. With respect to my snoring, I am passive and am not morally responsible, unless, for example, I should have taken steps earlier to prevent my snoring. But in cases of genuine action, I am the cause of what happens, and I may properly be held responsible, unless I have an adequate excuse or justification. When I move my finger, I am the cause of the finger’s motion. When I say ‘Good morning’, I am the cause of the sound or utterance. True, the immediate causes are muscle contractions in the one case and lung, lip and tongue motions in the other. But this is compatible with my being the cause - perhaps I cause these immediate causes, or perhaps it is just the case that some events can have both an agent and other events as their cause.
All this is suggestive, but not really adequate. We do not understand the intended force of ‘I am the cause’ any more than we understand the intended force of ‘Snoring is not something I do’. If I trip and fall in your flower garden, ‘I am the cause’ of any resulting damage, but neither the damage nor my fall is my action.
Before considering how we are to explain what actions, as contrasted with ‘mere’ doings, are, it will be convenient to say something about how they are to be individuated.
If I say ‘Good morning’ to you over the telephone, I have acted. But how many actions have I performed, and how are they related to one another and to associated events? We may describe what is done as follows:
(1) Move my tongue and lips in certain ways, while exhaling.
(2) Say ‘Good morning’.
(3) Cause a certain sequence of modifications in the current flowing in your telephone.
(4) Say ‘Good morning’ to you.
(5) Greet you.
The list - not exhaustive, by any means - is of act types. I have performed an action of each type, and a ‘by’-relation holds among them: I greet you by saying ‘Good morning’ to you, but not the converse, and similarly for the others on the list. But are these five distinct actions I performed, one of each type, or are the five descriptions all of a single action, which was of these five (and more) types? Both positions, and a variety of intermediate positions, have been defended.
How many words are there in the sentence ‘The cat is on the mat’? There are, of course, at least two answers to this question, because one can enumerate either the word types, of which there are five, or the word tokens, of which there are six. Moreover, depending on how one chooses to think of word types, another answer is possible: since the sentence contains definite articles, nouns, a preposition and a verb, there are four grammatically different types of word in the sentence.
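Because these counting conventions are easy to blur in prose, a short sketch may help. The Python fragment below is an illustration only: the lower-casing step (so that ‘The’ and ‘the’ count as one type) and the hand-assigned part-of-speech labels are assumptions made for the example.

```python
# Counting word tokens, word types, and grammatical types in
# 'The cat is on the mat'.
sentence = "The cat is on the mat"

# Tokens: every occurrence counts separately.
tokens = sentence.lower().split()
print(len(tokens))              # 6 word tokens

# Types: repeated spellings ('the' occurs twice) count once.
types = set(tokens)
print(len(types))               # 5 word types

# Grammatical types: labels assigned by hand for this example.
pos = {"the": "article", "cat": "noun", "is": "verb",
       "on": "preposition", "mat": "noun"}
print(len(set(pos.values())))   # 4 grammatical types of word
```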
The type/token distinction, understood as a distinction between sorts of things and their particular instances, matters elsewhere in the philosophy of mind: the identity theory asserts that mental states are physical states, and this raises the question whether the identity in question is of types or of tokens.
During the past two decades or so, the concept of supervenience has seen increasing service in the philosophy of mind. The thesis that the mental is supervenient on the physical - roughly, the claim that the mental character of a thing is wholly determined by its physical nature - has played a key role in the formulation of some influential positions on the mind-body problem. Much of our evidence for mind-body supervenience seems to consist in our knowledge of specific correlations between mental states and physical (in particular, neural) processes in humans and other organisms. Such knowledge, although extensive and in some ways impressive, is still quite rudimentary and far from complete (what do we know, or can we expect to know, about the exact neural substrate of, say, the sudden thought that you are late with your rent payment this month?). It may be that our willingness to accept mind-body supervenience, although based in part on specific psychophysical dependencies, has to be supported by a deeper metaphysical commitment to the primacy of the physical; it may in fact be an expression of such a commitment.
However, there are kinds of mental state that raise special issues for mind-body supervenience. One such kind is ‘wide content’ states, i.e., contentful mental states that seem to be individuated essentially by reference to objects and events outside the subject. Consider the notion of a concept, like the related notion of meaning. The word ‘concept’ itself is applied to a bewildering assortment of phenomena commonly thought to be constituents of thought. These include internal mental representations, images, words, stereotypes, senses, properties, reasoning and discrimination abilities, and mathematical functions. Given the lack of anything like a settled theory in this area, it would be a mistake to fasten readily on any one of these phenomena as the unproblematic referent of the term. One does better to survey the geography of the area and gain some idea of how these phenomena might fit together, leaving aside for the nonce just which of them deserve to be called ‘concepts’ as ordinarily understood.
There is a specific role that concepts are arguably intended to play that may serve as a point of departure. Suppose one person thinks that capitalists exploit workers, and another that they do not. Call the thing they disagree about ‘a proposition’, e.g., that capitalists exploit workers. It is in some sense shared by them as the object of their disagreement, and it is expressed by the sentence that follows the verb ‘thinks that’ (mental verbs that take such sentence complements are verbs of ‘propositional attitude’). Concepts are the constituents of such propositions, just as the words ‘capitalists’, ‘exploit’ and ‘workers’ are constituents of the sentence. These people could have these beliefs only if they had, inter alia, the concepts capitalist, exploit and workers.
Propositional attitudes, and thus concepts, are constitutive of the familiar form of explanation (so-called ‘intentional explanation’) by which we ordinarily explain the behaviour and states of people, many animals and, perhaps, some machines. The concept of intentionality was originally used by medieval scholastic philosophers. It was reintroduced into European philosophy by the German philosopher and psychologist Franz Clemens Brentano (1838-1917), whose thesis, proposed in his Psychology from an Empirical Standpoint (1874), is that it is the intentionality or directedness of mental states that marks off the mental from the physical.
Many mental states and activities exhibit the feature of intentionality, being directed at objects. Two related things are meant by this. First, when one desires or believes or hopes, one always desires or believes or hopes something. Suppose, for instance, that belief report (1) is true:
(1) Most Canadians believe that George Bush is a Republican.
Report (1) tells us that certain subjects, most Canadians, have a certain attitude, belief, to something, designated by the nominal phrase ‘that George Bush is a Republican’ and identified by its content-sentence:
(2) George Bush is a Republican.
Following Russell and contemporary usage, we may call the object referred to by the that-clause in (1), and expressed by (2), a proposition. Notice, too, that sentence (2) might also serve as most Canadians’ belief-text, a sentence they could use to express the belief that (1) reports them to have. An utterance of (2) by itself would assert the truth of the proposition it expresses, but as part of (1) its role is not to assert anything, only to identify what the subjects believe. This same proposition can be the object of other attitudes of other people: most Canadians may regret that Bush is a Republican, Reagan may remember that he is, Buchanan may doubt that he is.
Nevertheless, following Brentano (1960), we can focus on two puzzles about the structure of intentional states and activities, an area in which the philosophy of mind meets the philosophy of language, logic and ontology. The term ‘intentionality’ should not be confused with the terms ‘intention’ and ‘intension’. There is, nonetheless, an important connection between intension and intentionality, for semantical systems, like extensional model theory, that are limited to extensions cannot provide plausible accounts of the language of intentionality.
The attitudes are philosophically puzzling because it is not easy to see how the intentionality of the attitudes fits with another conception of them, as local mental phenomena.
Beliefs, desires, hopes, and fears seem to be located in the heads or minds of the people that have them. Our attitudes are accessible to us through ‘introspection’: a Canadian can tell that he believes Bush to be a Republican just by examining the ‘contents’ of his own mind; he does not need to investigate the world around him. We think of attitudes as being caused at certain times by events that impinge on the subject’s body, especially by perceptual events, such as reading a newspaper or seeing a picture of an ice-cream cone. Moreover, the psychological level of description carries with it a mode of explanation which has no echo in physical theory: we regard ourselves and each other as rational, purposive creatures, fitting our beliefs to the world as we perceive it and seeking to obtain what we desire in the light of them. Reason-giving explanations can be offered not only for actions and beliefs, which will receive most attention here, but also for desires, intentions, hopes, fears, angers, affections, and so forth. Indeed, their position within a network of rationalizing links is part of the individuating characteristics of this range of psychological states and the intentional acts they explain.
Meanwhile, these attitudes can in turn cause changes in other mental phenomena, and eventually in the observable behaviour of the subject. Seeing a picture of an ice-cream cone leads to a desire for one, which leads me to forget the meeting I am supposed to attend and walk to the ice-cream parlour instead. All of this seems to require that attitudes be states and activities that are localized in the subject.
Nonetheless, the phenomenon of intentionality suggests that the attitudes are essentially relational in nature: they involve relations to the propositions at which they are directed and to the objects they are about. These objects may be quite remote from the minds of subjects. An attitude seems to be individuated by the agent, the type of attitude (belief, desire, and so on), and the proposition at which it is directed. It seems essential to the attitude reported by (1), for example, that it is directed toward the proposition that Bush is a Republican, and it seems essential to this proposition that it is about Bush. But how can a mental state or activity of a person essentially involve some other individual? The problem is brought out by two classical puzzles, known as the problems of ‘no-reference’ and ‘co-reference’.
The classical solution to such problems is to suppose that intentional states are only indirectly related to concrete particulars, like George Bush, whose existence is contingent and which can be thought about in a variety of ways. The attitudes directly involve abstract objects of some sort, whose existence is necessary and whose nature the mind can directly grasp. These abstract objects provide concepts, or ways of thinking of, concrete particulars. Different concepts of the same particular correspond to different inferential and practical roles: different perceptions and memories give rise to beliefs involving them, and they serve as reasons for different actions. If we individuate propositions by concepts rather than by individuals, the co-reference problem disappears.
The proposal has the bonus of also taking care of the no-reference problem. Some propositions will contain concepts that are not, in fact, of anything. These propositions can still be believed, desired, and the like.
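The shape of this classical solution can be made vivid with a toy model. The sketch below is a minimal illustration, not anyone’s official theory: attitudes are modelled as agent/attitude-type/proposition triples, and propositions are individuated by concepts (modes of presentation) rather than by the individuals those concepts pick out. All class names and fields are hypothetical.

```python
# Toy model: propositions individuated by concepts, not referents.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Concept:
    mode_of_presentation: str          # how the object is thought of
    referent: Optional[str] = None     # may be None: the no-reference case

@dataclass(frozen=True)
class Proposition:
    subject: Concept
    predicate: str

@dataclass(frozen=True)
class Attitude:
    agent: str
    kind: str                          # 'belief', 'desire', ...
    content: Proposition

bush_by_name = Concept("George Bush", referent="George Bush")
bush_by_role = Concept("the winner of the election", referent="George Bush")

p1 = Proposition(bush_by_name, "is a Republican")
p2 = Proposition(bush_by_role, "is a Republican")

# Same referent, different concepts: distinct propositions, so one can
# believe p1 while doubting p2 (the co-reference problem dissolves).
assert p1 != p2

# A concept with no referent still yields a believable proposition
# (the no-reference problem).
pegasus = Concept("Pegasus", referent=None)
belief = Attitude("S", "belief", Proposition(pegasus, "has wings"))
```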
This basic idea has been worked out in different ways by a number of authors. The Austrian philosopher Ernst Mally thought that propositions involved abstract particulars that ‘encode’ properties, like being the loser of the 1992 election, rather than concrete particulars, like Bush, which exemplify them. There are abstract particulars that encode clusters of properties that nothing exemplifies, and two abstract objects can encode different clusters of properties that are exemplified by a single thing. The German philosopher Gottlob Frege distinguished between the ‘sense’ and the ‘reference’ of expressions. The senses of ‘George Bush’ and ‘the person who will come in second in the election’ are different, even though the references are the same. Senses are grasped by the mind, are directly involved in propositions, and incorporate ‘modes of presentation’ of objects.
For most of the twentieth century, the most influential approach was that of the British philosopher Bertrand Russell. Russell (1905, 1929) in effect recognized two kinds of propositions. A ‘singular proposition’ consists of particulars, together with properties of, or relations among, them; an example is the proposition consisting of Bush and the property of being a Republican. ‘General propositions’ involve only universals; the general proposition corresponding to ‘someone is a Republican’ would be a complex consisting of the property of being a Republican and the higher-order property of being instantiated. (The terms ‘singular proposition’ and ‘general proposition’ are from Kaplan (1989).)
Historically, a great deal has been asked of concepts. As shareable constituents of the objects of attitudes, they presumably figure in cognitive generalizations and explanations of animals’ capacities and behaviour. They are also presumed to serve as the meanings of linguistic items, underwriting relations of translation, definition, synonymy, antonymy and semantic implication. Much work in the semantics of natural language takes itself to be addressing conceptual structure.
Concepts have also been thought to be the proper objects of philosophical analysis, the activity practised by Socrates and twentieth-century ‘analytic’ philosophers when they ask about the nature of justice, knowledge or piety, and expect to discover answers by means of a priori reflection alone.
The expectation that one sort of thing could serve all these tasks went hand in hand with what has come to be known as the ‘Classical View’ of concepts, according to which they have an ‘analysis’ consisting of conditions that are individually necessary and jointly sufficient for their satisfaction, and which are known to any competent user of them. The standard example is the especially simple one of [bachelor], which seems to be identical to [eligible unmarried male]. A more interesting, but problematic, one has been [knowledge], whose analysis was traditionally thought to be [justified true belief].
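On the Classical View, a concept’s analysis has the shape of a conjunctive definition, and that shape is easy to exhibit in code. The toy Python predicate below is a sketch only: the three conditions are illustrative placeholders keyed to the [eligible unmarried male] example, not a serious analysis.

```python
# Toy rendering of the Classical View analysis of [bachelor]:
# each conjunct is individually necessary; together they are
# jointly sufficient for satisfying the concept.
def is_bachelor(person: dict) -> bool:
    return (person["male"]
            and not person["married"]
            and person["eligible"])    # e.g. adult and marriageable

fred = {"male": True, "married": False, "eligible": True}
tom = {"male": True, "married": True, "eligible": True}

print(is_bachelor(fred))   # True: satisfies every condition
print(is_bachelor(tom))    # False: fails the 'unmarried' condition

# The analysis also classifies merely possible cases (freckled or
# not makes no difference), which is how it supports counterfactuals.
```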
This Classical View seems to offer an illuminating answer to a certain form of metaphysical question - in virtue of what is something the kind of thing it is, e.g., in virtue of what is a bachelor a bachelor? - and it does so in a way that supports counterfactuals: it tells us what would satisfy the concept in situations other than the actual ones (although all actual bachelors might turn out to be freckled, it is possible that there might be unfreckled ones, since the analysis does not exclude that). The View also seems to offer an answer to an epistemological question: how do people seem to know a priori (or independently of experience) about the nature of many things, e.g., that bachelors are unmarried? It is constitutive of the competency (or possession) conditions of a concept that its users know its analysis, at least on reflection.
Actions, then, may be described as doings having mentalistic explanations. Coughing is sometimes like snoring and sometimes like saying ‘Good morning’ - that is, sometimes a mere doing and sometimes an action. Deliberate coughing can be explained by invoking an intention to cough, a desire to cough or some other ‘pro-attitude’ toward coughing, a reason for or purpose in coughing, or something similarly mental. We may thus think of actions as ‘outputs’ of the ‘mental machine’. The functionalist thinks of mental states and events as causally mediating between a subject’s sensory inputs and the subject’s ensuing behaviour. Functionalism itself is the stronger doctrine that what ‘makes’ a mental state the type of state it is - a pain, a smell of violets, a belief that koalas are dangerous - is the set of functional relations it bears to the subject’s perceptual stimuli, behavioural responses and other mental states.
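The functionalist picture lends itself to a small state-transition sketch, in the spirit of early ‘machine functionalism’. The Python fragment below is an assumption-laden toy, not a theory of pain: the states, stimuli and responses are invented labels, and the point is only that a state is typed by its place in the table - its relations to inputs, outputs and other states - rather than by its intrinsic make-up.

```python
# A toy functional-role specification: a state is individuated by
# how it mediates between stimuli, behaviour, and other states.
transitions = {
    # (current state, stimulus) -> (behavioural output, next state)
    ("calm", "tissue damage"):     ("wince", "pain"),
    ("pain", "tissue damage"):     ("cry out", "pain"),
    ("pain", "analgesic"):         ("relax", "calm"),
}

def step(state: str, stimulus: str) -> tuple[str, str]:
    """Return (behaviour, next state) as fixed by the role table."""
    return transitions[(state, stimulus)]

# On this picture, whatever realizes the table - neurons or silicon -
# counts as being in 'pain' when it occupies the pain-role state.
behaviour, state = step("calm", "tissue damage")
print(behaviour, state)   # wince pain
```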
Twentieth-century functionalism gained credibility in an indirect way, by being perceived as affording the least objectionable solution to the mind-body problem.
Disaffected from Cartesian dualism and from the ‘first-person’ perspective of introspective psychology, the behaviourists had claimed that there is nothing to the mind but the subject’s behaviour and dispositions to behave. To refute the view that a certain level of behavioural dispositions is necessary for a mental life, we would need convincing cases of thinking stones, utterly incurable paralytics or disembodied minds. But these alleged possibilities are, to some, merely that.
To refute the view that a certain level of behavioural dispositions is sufficient for a mental life, we need convincing cases of rich behaviour with no accompanying mental states. The typical example is of a puppet controlled, by radio-wave links, by other minds outside the puppet’s hollow body. But one might wonder whether the dramatic devices are producing the anti-behaviourist intuition all by themselves. And how could the dramatic devices make a difference to the facts of the case? If the puppeteers were replaced by a machine, not designed by anyone, yet storing a vast number of input-output conditionals, which was reduced in size and placed in the puppet’s head, would we still have a compelling counterexample to the behaviour-as-sufficient view? At least it is not so clear.
Such an example would work equally well against the anti-eliminativist version of the view - that mental states supervene on behavioural dispositions. But supervenient behaviourism could be refuted by something less ambitious. The ‘X-worlders’ of the American philosopher Hilary Putnam (1926-), who are in intense pain but do not betray this in their verbal or non-verbal behaviour, behaving just as pain-free human beings do, would be the right sort of case. However, even if Putnam has produced a counterexample for pain - which the American philosopher of mind Daniel Clement Dennett (1942-), for one, would doubtless deny - an ‘X-worlder’ story designed to refute supervenient behaviourism with respect to the attitudes or linguistic meaning will be less intuitively convincing. Behaviourist resistance is easier here, for having a belief, or meaning a certain thing, lacks a distinctive phenomenology.
There is a more sophisticated line of attack. As Willard Van Orman Quine (1908-2000), the most influential American philosopher of the latter half of the twentieth century, has remarked, some have taken his thesis of the indeterminacy of translation as a reductio of his behaviourism. For this to be convincing, Quine’s argument for the indeterminacy thesis has to be persuasive in its own right, and that is a disputed matter.
If behaviourism is finally laid to rest to the satisfaction of most philosophers, it will probably not be by counterexamples, or by a reductio from Quine’s indeterminacy thesis. Rather, it will be because the behaviourists’ worries about other minds and the public availability of meaning have been shown to be groundless, or not to require behaviourism for their solution. But we can be sure that this happy day will take some time to arrive.
Quine became noted for his claim that the way one uses language determines what kinds of things one is committed to saying exist. Moreover, the justification for speaking one way rather than another, just as the justification for adopting one conceptual system rather than another, was a thoroughly pragmatic one for Quine (see Pragmatism). He also became known for his criticism of the traditional distinction between synthetic statements (empirical, or factual, propositions) and analytic statements (necessarily true propositions). Quine made major contributions in set theory, a branch of mathematical logic concerned with the relationship between classes. His published works include Mathematical Logic (1940), From a Logical Point of View (1953), Word and Object (1960), Set Theory and Its Logic (1963), and Quiddities: An Intermittently Philosophical Dictionary (1987). His autobiography, The Time of My Life, appeared in 1985.
Functionalism, and cognitive psychology considered as a complete theory of human thought, inherited some of the same difficulties that earlier beset behaviourism and the identity theory. These remaining obstacles fall into two main categories: intentionality problems and qualia problems.
Propositional attitudes such as beliefs and desires are directed upon states of affairs which may or may not actually obtain, e.g., that the Republican candidate will win, and are about individuals who may or may not exist, e.g., King Arthur. Franz Brentano raised the question of how any purely physical entity or state could have the property of being ‘directed upon’ or about a non-existent state of affairs or object: that is not the sort of feature that ordinary, purely physical objects can have.
The standard functionalist reply is that propositional attitudes have Brentano’s feature because the internal physical states and events that realize them ‘represent’ actual or possible states of affairs. What they represent is determined, at least in part, by their functional roles: mental events, states or processes with content involve reference to objects, properties or relations. A mental state with content can fail to refer, but there always exist specific conditions for a state with content to refer to certain things: when the state has a correctness or fulfilment condition, its correctness is determined by whether its referents have the properties the content specifies for them.
There are at least two difficulties. One is that of saying exactly how a physical item’s representational content is determined: in virtue of what does a neurophysiological state represent precisely that the Republican candidate will win? An answer to that general question is what the American philosopher of mind Jerry Alan Fodor (1935-) has called a ‘psychosemantics’, and several attempts at one have been made. Taking the analogy between thought and computation seriously, Fodor believes that mental representations should be conceived as individual states with their own identities and structures, like formulae transformed by processes of computation or thought. His views are frequently contrasted with those of ‘holists’ such as the American philosopher Donald Herbert Davidson (1917-2003), who works within a generally holistic theory of knowledge and meaning: a radical interpreter can tell when a subject holds a sentence true and, using the principle of ‘charity’, ends up making an assignment of truth conditions to it; Davidson is thus a defender of radical interpretation and of the inscrutability of reference. The holist approach has seemed to many to offer some hope of identifying meaning as a respectable notion, even within a broadly ‘extensional’ approach to language. Fodor’s views are also contrasted with those of ‘instrumentalists’ about mental ascription, such as Daniel Clement Dennett (1942-), who has in addition been a major force in showing how the philosophy of mind needs to be informed by work in the surrounding sciences.
In giving an account of what someone believes, does essential reference have to be made to how things are in the environment of the believer? And, if so, exactly what relation does the environment have to the belief? These questions involve taking sides in the externalism/internalism debate. To a first approximation, the externalist holds that one’s propositional attitudes cannot be characterized without reference to the disposition of objects and properties in the world - the environment - in which one is situated. The internalist thinks that propositional attitudes (especially belief) must be characterizable without such reference. The reason that this is only a first approximation of the contrast is that there can be different sorts of externalism. Thus, one sort of externalist might insist that you could not have, say, a belief that grass is green unless it could be shown that there was some relation between you, the believer, and grass. Had you never come across the plant which makes up lawns and meadows, beliefs about grass would not be available to you. However, this does not mean that you have to be in the presence of grass in order to entertain a belief about it, nor does it even mean that there was necessarily a time when you were in its presence. For example, it might have been the case that, though you have never seen grass, it has been described to you. Or, at the extreme, perhaps grass no longer exists anywhere in the environment, but your ancestors’ contact with it left some sort of genetic trace in you, and the trace is sufficient to give rise to a mental state that could be characterized as about grass.
At the more specific level that has been the focus in recent years: what do thoughts have in common in virtue of which they are thoughts? That is, what makes a thought a thought? What makes a pain a pain? Cartesian dualism said the ultimate nature of the mental was to be found in a special mental substance. Behaviourism identified mental states with behavioural dispositions; physicalism, in its most influential version, identifies mental states with brain states. One could imagine that the individual states that occupy the relevant causal roles turn out not to be bodily states: for example, they might instead be states of a Cartesian unextended substance. But it is overwhelmingly likely that the states that do occupy those causal roles are all tokens of bodily-state types. However, a problem does seem to arise about the properties of mental states. Suppose ‘pain’ is identical with a certain firing of c-fibres. Although a particular pain is the very same state as a neural firing, we identify that state in two different ways: as a pain and as a neural firing. The state will therefore have certain properties in virtue of which we identify it as a pain and others in virtue of which we identify it as a neural firing. Those in virtue of which we identify it as a pain will be mental properties, whereas those in virtue of which we identify it as a neural firing will be physical properties. This has seemed to many to lead to a kind of dualism at the level of the properties of mental states. Even if we reject a dualism of substances and take people simply to be physical organisms, those organisms still have both mental and physical states. Similarly, even if we identify those mental states with certain physical states, those states will nonetheless have both mental and physical properties. So, disallowing dualism with respect to substances and their states simply leads to its reappearance at the level of the properties of those states.
The problem concerning mental properties is widely thought to be most pressing for sensations, since the painful quality of pains and the red quality of visual sensations seem to be irretrievably non-physical. So, even if mental states are all identical with physical states, those states appear to have properties that are not physical. And if mental states do actually have non-physical properties, the identity of mental with physical states would not sustain a thoroughgoing mind-body materialism.
A more sophisticated reply to the difficulty about mental properties is due independently to D.M. Armstrong (1968) and David Lewis (1972), who argue that for a state to be a particular sort of intentional state or sensation is for that state to bear characteristic causal relations to other particular occurrences. The properties in virtue of which we identify states as thoughts or sensations will still be neutral as between being mental or physical, since anything can bear a causal relation to anything else. But causal connections have a better chance than similarity in some unspecified respect of capturing the distinguishing properties of sensations and thoughts.
It should be mentioned that properties can be more complex than the above allows. For instance, in the sentence ‘John is married to Mary’, we are attributing to John the property of being married. And, unlike the property of being bald, this property of John is essentially relational. Moreover, it is commonly said that ‘is married to’ expresses a relation rather than a property. The terminology is not fixed, however: some authors speak of relations as different from properties in being more complex but like them in being non-linguistic, though it is more common to treat relations as a sub-class of properties.
The Classical view, meanwhile, has always had to face the difficulty of ‘primitive’ concepts: it is all well and good to claim that competence consists in some sort of mastery of a definition, but what about the primitive concepts in which a process of definition must ultimately end? Here the British Empiricism of the seventeenth and eighteenth centuries offered a solution: all the primitives were sensory. Indeed, the empiricists expanded the Classical view to include the claim, now often taken uncritically for granted in discussions of that view, that all concepts are ‘derived from experience’: ‘every idea is derived from a corresponding impression’. In the work of John Locke (1632-1704), George Berkeley (1685-1753) and David Hume (1711-76) this was taken to mean that concepts were somehow ‘composed’ of introspectible mental items - ‘images’, ‘impressions’ - that were ultimately decomposable into basic sensory parts. Thus, Hume analyzed the concept of [material object] as involving certain regularities in our sensory experience, and [cause] as involving constant conjunction.
Berkeley noticed a problem with this approach that every generation has had to rediscover: if a concept is a sensory impression, like an image, then how does one distinguish a general concept [triangle] from a more particular one - say, [isosceles triangle] - given that any image that would serve for the general concept is equally an image of some particular kind of triangle? More recently, Wittgenstein (1953) called attention to the multiple ambiguity of images. And, in any case, images seem quite hopeless for capturing the concepts associated with logical terms (what is the image for negation or possibility?). Whatever the role of such representations, full conceptual competence must involve something more.
Indeed, in addition to images and impressions and other sensory items, a full account of concepts needs to consider issues of logical structure. This is precisely what the ‘logical positivists’ did, focussing on logically structured sentences instead of sensations and images, and transforming the empiricist claim into the famous ‘Verifiability Theory of Meaning’: the meaning of a sentence is the means by which it is confirmed or refuted, ultimately by sensory experience; the meaning or concept associated with a predicate is the means by which people confirm or refute whether something satisfies it.
This once-popular position has come under much attack in philosophy in the last fifty years. In the first place, few, if any, successful ‘reductions’ of ordinary concepts, like [material object] and [cause], to purely sensory concepts have ever been achieved. Alfred Jules Ayer (1910-89), one of the most important modern epistemologists, nonetheless conceded in his first and most famous book, ‘Language, Truth and Logic’, that little is left for epistemology to the extent that it is concerned with the a priori justification of our ordinary or scientific beliefs, since for Ayer the validity of such beliefs ‘is an empirical matter, which cannot be settled by such means’. However, he does take positions which have a bearing on epistemology. For example, he is a phenomenalist, believing that material objects are logical constructions out of actual and possible sense-experience, and an anti-foundationalist, at least in one sense, denying that there is a bedrock level of indubitable propositions on which empirical knowledge can be based. As regards the main specifically epistemological problem he addressed, the problem of our knowledge of other minds, he is essentially behaviouristic, since the verification principle pronounces that the hypothesis of occurrences of intrinsically inaccessible experiences is unintelligible.
Although his views were later modified, Ayer early maintained that all meaningful statements are either logical or empirical. According to his principle of verification, a statement is empirical only if some sensory observation is relevant to determining its truth or falsity. Sentences that are neither logical nor empirical - including traditional religious, metaphysical, and ethical sentences - are judged nonsensical. Other works of Ayer include The Problem of Knowledge (1956), the Gifford Lectures of 1972-73, published as The Central Questions of Philosophy (1973), and Part of My Life: The Memoirs of a Philosopher (1977).
Ayer’s main contribution to epistemology is ‘The Problem of Knowledge’, which he himself regarded as superior to ‘Language, Truth and Logic’ (Ayer 1985). There Ayer develops a fallibilist type of foundationalism, according to which processes of justification or verification terminate in someone’s having an experience, but there is no class of infallible statements based on such experiences. Consequently, in making statements based on experience, even simple reports of observation, we ‘make what appears to be a special sort of advance beyond our data’ (1956), and it is the resulting gap which the sceptic exploits. Ayer describes four possible responses to the sceptic: naïve realism, according to which material objects are directly given in perception, so that there is no advance beyond the data; reductionism, according to which physical objects are logically constructed out of the contents of our sense-experiences, so that again there is no real advance beyond the data; a position according to which there is an advance, but one that can be supported by the canons of valid inductive reasoning; and lastly a position called ‘descriptive analysis’, according to which ‘we can give an account of the procedures that we actually follow . . . but there [cannot] be a proof that what we take to be good evidence really is so’.
Ayer’s reason why our sense-experiences afford us grounds for believing in the existence of physical objects is simply that sentences which are taken as referring to physical objects are used in such a way that our having the appropriate experiences counts in favour of their truth. In other words, having such experiences is exactly what the justification of our ordinary beliefs about the nature of the world ‘consists in’. The suggestion is, therefore, that the sceptic is making some kind of mistake, or indulging in some sort of incoherence, in supposing that our experience may not rationally justify our commonsense picture of what the world is like. Against this, however, stands the familiar fact that the sceptic’s undermining hypotheses seem perfectly intelligible and even epistemically possible. Ayer’s response seems weak relative to the power of the sceptical puzzles.
The concept of ‘the given’ refers to the immediate apprehension of the contents of sense experience, expressed in first-person, present-tense reports of appearances. Apprehension of the given is seen as immediate both in a causal sense, since it lacks the usual causal chain involved in perceiving real qualities of physical objects, and in an epistemic sense, since judgements expressing it are justified independently of all other beliefs and evidence. Some proponents of the idea of the given maintain that its apprehension is absolutely certain: infallible, incorrigible and indubitable. It has been claimed also that a subject is omniscient with regard to the given: if a property appears, then the subject knows this.
The doctrine dates back at least to Descartes, who argued in Meditation II that it was beyond all possible doubt and error that he seemed to see light, hear noise, and so forth. The empiricists added the claim that the mind is passive in receiving sense impressions, so that there is no subjective contamination or distortion here (even though the states apprehended are mental). The idea was taken up in twentieth-century epistemology by C.I. Lewis and A.J. Ayer, among others, who appealed to the given as the foundation for all empirical knowledge. Since beliefs expressing only the given were held to be certain and justified in themselves, they could serve as solid foundations - though empiricism, like any philosophical movement, is challenged to show how such claims about the structure of knowledge and meaning can themselves be intelligible and known within the constraints it accepts.
The second argument for the need for foundations is sound. It appeals to the possibility of incompatible but fully coherent systems of belief, only one of which could be completely true. In light of this possibility, coherence cannot suffice for complete justification. There is, however, another distinction to draw, one that cuts across the distinction between weak and strong coherence theories of justification: the distinction between positive and negative coherence theories. A positive coherence theory tells us that if a belief coheres with a background system of beliefs, then the belief is justified - coherence has the power to produce justification; according to a negative coherence theory, by contrast, coherence has only the power to nullify justification. On either version, justification is solely a matter of how a belief coheres with a system of beliefs.
Coherence theories of justification have a common feature: they are what are called ‘internalistic theories of justification’, theories affirming that coherence is a matter of internal relations between beliefs and that justification is a matter of coherence. If justification is solely a matter of internal relations between beliefs, we are left with the possibility that those internal relations might fail to correspond with any external reality. How, one might object, can a completely internal, subjective notion of justification bridge the gap between mere true belief, which might be no more than a lucky guess, and knowledge, which must be grounded in some connection between internal subjective conditions and external objective realities?
The answer is that it cannot, and that something more than justified true belief is required for knowledge. This result has, however, been established quite apart from considerations of coherence theories of justification. What is required may be put by saying that one’s justification must be undefeated by errors in the background system of beliefs. A justification is undefeated by error just in case it would sustain the justification of the belief on the basis of the corrected system. So knowledge, on this sort of positive coherence theory, is true belief that coheres with the background belief system and with corrected versions of that system. In short, knowledge is true belief plus justification resulting from coherence and undefeated by error.
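The ‘undefeated justification’ condition can be stated schematically (my reconstruction of the view just sketched, not a formula from any of the authors discussed): where \(S\) is the subject’s background system of beliefs and \(S^{*}\) is the system that results from correcting the errors in \(S\),

\[
K(p) \iff p \text{ is true} \;\wedge\; p \text{ coheres with } S \;\wedge\; p \text{ coheres with } S^{*}.
\]

The third conjunct is what ‘undefeated by error’ adds to mere coherence with the background system.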
Without some independent indication that some of the beliefs within a coherent system are true, coherence in itself is no indication of truth: fairy stories can cohere. But our criteria for justification must indicate to us the probable truth of our beliefs. Hence, within any system of beliefs there must be some privileged class of beliefs with which the others must cohere in order to be justified. In the case of empirical knowledge, such privileged beliefs must represent the point of contact between subject and world: they must originate in perception. When challenged, however, we justify our ordinary perceptual beliefs about physical properties by appeal to beliefs about appearances. The latter, then, seem more suitable as foundations, since there is no class of more certain perceptual beliefs to which we appeal for their justification.
The argument that foundations must be certain was offered by the American philosopher C.I. Lewis (1883-1964). He held that no proposition can be probable unless some are certain. If the probability of all propositions or beliefs were relative to evidence expressed in others, and if these relations were linear, then any regress would apparently have to terminate in propositions or beliefs that are certain. But Lewis shows neither that such relations must be linear nor that regresses cannot terminate in beliefs that are merely probable, or justified in themselves, without being certain or infallible.
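Lewis’s regress can be displayed schematically (a reconstruction under the linearity assumption the argument requires): if every belief’s probability is conditional on further beliefs,

\[
P(p_1 \mid p_2), \quad P(p_2 \mid p_3), \quad P(p_3 \mid p_4), \;\ldots
\]

then a linear chain that cannot regress forever must terminate in some \(p_n\) with unconditional probability \(P(p_n) = 1\) - a certainty. The reply just noted denies both that the chain must be linear and that it must end in certainties rather than in beliefs that are merely probable in themselves.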
Arguments against the idea of the given originate with the German philosopher and founder of critical philosophy, Immanuel Kant (1724-1804). (The intellectual landscape in which Kant began his career was largely set by Gottfried Wilhelm Leibniz (1646-1716), filtered through the principal follower and interpreter of Leibniz, Christian Wolff, who was primarily a mathematician but renowned as a systematic philosopher.) Kant argues in Book I of the Transcendental Analytic that percepts without concepts do not yet constitute any form of knowing. Being non-epistemic, they presumably cannot serve as epistemic foundations. Once we recognize that we must apply concepts of properties to appearances, and formulate beliefs utilizing those concepts, before the appearances can play any epistemic role, it becomes more plausible that such beliefs are fallible. The argument was developed in the twentieth century by Wilfrid Sellars (1912-89), whose work revolved around the difficulty of combining the scientific image of people and their world with the manifest image - our natural conception of ourselves as acquainted with intentions, meanings, colours, and other definitive aspects of experience - most influentially in his paper ‘Empiricism and the Philosophy of Mind’ (1956); in this and many other of his papers, Sellars explored the nature of thought and experience. According to Sellars (1963), the idea of the given involves a confusion between sensing particulars (having sense impressions), which is non-epistemic, and having non-inferential knowledge of propositions referring to appearances. The former may be necessary for acquiring perceptual knowledge, but it is not itself even a primitive kind of knowing; its being non-epistemic renders it immune from error, but also unsuitable as an epistemological foundation. The latter, non-inferential perceptual knowledge, is fallible, requiring concepts acquired through trained responses to public physical objects.
The contention that even reports of appearances are fallible can be supported from several directions. First, it seems doubtful that we can look beyond our beliefs to compare them with an unconceptualized reality, whether mental or physical. Second, to judge that anything, including an appearance, is ‘F’, we must remember which property ‘F’ is, and memory is admitted by all to be fallible. Our ascribing ‘F’ is normally not explicitly comparative, but its correctness requires memory nevertheless, at least if we intend to ascribe a reinstantiable property: we must apply the concept of ‘F’ consistently, and it seems always at least logically possible to apply it inconsistently. If, on the other hand, I intend in attending to an appearance merely to pick out demonstratively whatever property appears, then I seem not to be expressing a genuine belief. My apprehension of the appearance will not justify any other beliefs, and once more it will be unsuitable as an epistemological foundation.
Ayer (1950) sought to distinguish propositions expressing the given not by their infallibility, but by the alleged fact that grasping their meaning suffices for knowing their truth. However, this will be so only if the terms used have purely demonstrative meaning, and so only if the propositions fail to express beliefs that could ground others. If one uses genuine predicates - for example, ‘C#’ as applied to tones - then one may grasp their meaning and yet be unsure in their application to appearances. Limiting claims to appearances eliminates one major source of error in claims about physical objects: appearances cannot appear other than they are. Ayer’s requirement of grasping meaning eliminates a second source of error, conceptual confusion. But a third major source, misclassification, is genuine and can obtain even in this limited domain, even when Ayer’s requirement is satisfied.
Any proponent of the given faces a dilemma. If the terms used in statements expressing its apprehension are purely demonstrative, then such statements, assuming they are statements, are certain, but fail to express beliefs that could serve as foundations for knowledge; if what is expressed is not awareness of genuine properties, then the awareness does not justify its subject in believing anything else. However, if statements about what appears use genuine predicates that apply to reinstantiable properties, then the beliefs expressed can be neither infallible nor certain knowledge. Coherentists would add that such genuine beliefs stand in need of justification themselves and so cannot be foundations.
Contemporary foundationalists deny the coherentist’s claim while eschewing the claim that foundations, in the form of reports about appearances, are infallible; they seek alternatives to the given as foundations. Although the arguments against infallibility are strong, other objections to the idea of foundations are not. That concepts of objective properties are learned prior to concepts of appearances, for example, implies neither that claims about objective properties are justified independently of claims about appearances, nor that the former are prior in chains of justification. And that there can be no knowledge prior to the acquisition and consistent application of concepts allows for propositions whose truth requires only the consistent application of concepts, and this may be so for some claims about appearances.
Coherentists will claim that a subject requires evidence that he applies concepts consistently - that he distinguishes red from the other colours that appear to him. Beliefs about red appearances could not then be justified independently of other beliefs expressing that evidence. To save the part of the doctrine of the given that holds beliefs about appearances to be self-justified, we require an account of how such justification is possible: of how some beliefs about appearances can be justified without appeal to evidence. Some foundationalists simply assert such warrant as derived from experience but, like appeals to certainty by proponents of the given, this assertion seems ad hoc.
A better strategy is to tie an account of self-justification to a broader exposition of epistemic warrant. One such account sees justification as a kind of inference to the best explanation. A belief is shown to be justified if its truth is shown to be part of the best explanation for why it is held; a belief is self-justified if the best explanation for it is its truth alone. The best explanation for the belief that I am appeared to redly may be simply that I am. Such accounts seek to ground knowledge in perceptual experience without appealing to an infallible given, now almost universally dismissed.
Nonetheless, many problems concerning scientific change have been clarified, and many new answers suggested. Yet concepts central to its analysis (like ‘paradigm’, ‘core’, ‘problem’, ‘constraint’ and ‘verisimilitude’) remain contested, and not all of the devastating criticisms of the doctrines based on them have been answered satisfactorily.
Problems centrally important for the analysis of scientific change have been neglected. There are, for instance, lingering echoes of logical empiricism in claims that the methods and goals of science are unchanging, and thus are independent of scientific change itself, or that if they do change, they do so for reasons independent of those involved in substantive scientific change itself. By their very nature, such approaches fail to address the changes that actually occur in science. For example, even supposing that science ultimately seeks the general and unaltered goal of ‘truth’ or ‘verisimilitude’, that injunction itself gives no guidance as to what scientists should seek or how they should go about seeking it. More specific scientific goals do provide guidance, and, as the transition from mechanistic to gauge-theoretic goals illustrates, those goals are often altered in light of discoveries about what is achievable, or about what kinds of theories are promising. A theory of scientific change should account for these kinds of goal changes, and for how, once accepted, they alter the rest of the patterns of scientific reasoning and change, including ways in which more general goals and methods may be reconceived.
To declare scientific changes to be consequences of ‘observation’ or ‘experimental evidence’ is again to overstress the superficially unchanging aspects of science. We must ask how what counts as observation, experiment, and evidence itself alters in the light of newly accepted scientific beliefs. Likewise, it is now clear that scientific change cannot be understood in terms of dogmatically embraced holistic cores: the factors guiding scientific change are by no means the monolithic structures which they have been portrayed as being. Some writers prefer to speak of ‘background knowledge’ (or ‘information’) as shaping scientific change, the suggestion being that there are a variety of ways in which a variety of prior ideas influence scientific research in a variety of circumstances. But it is essential that any such complexity of influences be fully detailed, not left, as by the philosopher of science Karl Raimund Popper (1902-1994), with cursory treatment of a few functions selected to bolster a prior theory (in this case, falsificationism). Similarly, a focus on ‘constraints’ can mislead, suggesting too negative a concept to do justice to the positive roles of the information utilized. Insofar as constraints are scientific and not trans-scientific, they are usually ‘functions’, not ‘types’, of scientific propositions.
Traditionally, philosophy has concerned itself with relations between propositions which are specifically relevant to one another in form or content. So viewed, a philosophical explanation of scientific change should appeal to factors which are clearly more scientifically relevant in their content to the specific directions of new scientific research and conclusions than are social factors whose overt relevance lies elsewhere. Nonetheless, in recent years many writers, especially those of the ‘strong programme’ in the sociology of knowledge, have claimed that scientific practices must be assimilated to social influences.
Such claims are excessive. Despite allegations that even what counts as evidence is a matter of mere negotiated agreement, many consider that the last word has not been said on the idea that there is, in some deeply important sense, a ‘given’ to experience in terms of which we can, at least partially, judge theories, and ‘background information’ which can help guide those and other judgements. Even if we cannot appeal to such information to account for everything that science should and can be, and certainly not for what it often is in human practice, neither should we take the criticism of it for granted, accepting that scientific change is explainable only by appeal to external factors.
Equally, we cannot accept too readily the assumption (another logical-empiricist legacy) that our task is to explain science and its evolution by appeal to meta-scientific rules or goals, or metaphysical principles, arrived at in the light of purely philosophical analysis and altered (if at all) by factors independent of substantive science. For such trans-scientific analyses, even while claiming to explain ‘what science is’, do so in terms ‘external’ to the processes by which science actually changes.
Externalist claims are premature, because not enough is yet understood about the roles of indisputably scientific considerations in shaping scientific change, including changes of methods and goals. Even if we ultimately cannot accept the traditional ‘internalist’ approach to the philosophy of science, as philosophers concerned with the form and content of reasoning we must determine accurately how far it can be carried. For that task, historical and contemporary case studies are necessary but insufficient: too often the positive implications of such studies are left unclear, and it is too hastily assumed that whatever lessons are generated therefrom apply equally to later science. Larger lessons need to be extracted from concrete studies. Further, such lessons must, where possible, be given a systematic account, integrating the revealed patterns of scientific reasoning, and the ways they are altered, into a coherent interpretation of the knowledge-seeking enterprise - a theory of scientific change. Only through such efforts, whether they succeed or through understanding our failure in them, will it be possible to assess precisely the extent to which trans-scientific factors (meta-scientific, social, or otherwise) must be included in accounts of scientific change.
Much discussion of scientific change turns on the distinction between the contexts of discovery and justification. It is usually thought that there is no authoritative confirmation theory telling how bodies of evidence support a hypothesis; instead, science proceeds by a ‘hypothetico-deductive method’ or ‘method of conjectures and refutations’. By contrast, early inductivists held that (1) science begins with data collection, (2) rules of inference are applied to the data to obtain a theoretical conclusion, or at least to eliminate alternatives, and (3) that conclusion is established with high confidence or even proved conclusively by the rules. Rules of inductive reasoning were proposed by the English diplomat and philosopher Francis Bacon (1561-1626) and by the British mathematician and physicist, principal source of the classical scientific view of the world, Sir Isaac Newton (1642-1727), in the second edition of the Principia (‘Rules of Reasoning in Philosophy’). Such procedures were allegedly applied in Newton’s ‘Opticks’ and in many eighteenth-century experimental studies of heat, light, electricity, and chemistry.
According to Laudan (1981), two gradual realizations led to the rejection of this conception of scientific method. First, inferences from facts to generalizations are not established with certainty, and scientists therefore became more willing to consider hypotheses with little prior empirical grounding. Secondly, explanatory concepts often go beyond sense experience, and such trans-empirical concepts as ‘atom’ and ‘field’ can be introduced in the formulation of hypotheses. Thus, by the middle of the nineteenth century, the inductive conception had begun to be replaced by the method of hypothesis, or hypothetico-deductive method. On this view, the order of events in science is seen as, first, the introduction of a hypothesis and, second, the testing of the observational predictions of that hypothesis against observational and experimental results.
Twentieth-century relativity and quantum mechanics alerted scientists even more to the potential depth of departures from common sense and from earlier scientific ideas. Yet philosophers’ attention was drawn away from scientific change and directed toward an analysis of the atemporal, ‘formal’ characteristics of science: the dynamical character of science, so emphasized by physics itself, was lost in a quest for unchanging characteristics definitive of science and its major components, i.e., the ‘content’ of thought and the ‘meanings’ of fundamental ‘meta-scientific’ concepts. The hypothetico-deductive conception of method, endorsed by the logical empiricists, was likewise construed in these terms: ‘discovery’, the introduction of new ideas, was grist for historians, psychologists or sociologists, whereas the ‘justification’ of scientific ideas was the application of logic, and thus the proper object of philosophy of science.
The fundamental tenet of logical empiricism is that the warrant for all scientific knowledge rests upon empirical evidence in conjunction with logic, where logic is taken to include induction or confirmation, as well as mathematics and formal logic. In the eighteenth century the work of the empiricist John Locke (1632-1704) had important implications for the other social sciences: the rejection of innate ideas in Book I of the Essay encouraged an emphasis on the empirical study of human societies, to discover just what explained their variety, and this led toward the establishment of the science of social anthropology.
Induction, in logic, is the process of drawing a conclusion about an object or event that has yet to be observed or to occur, based on previous observations of similar objects or events. For example, after observing year after year that a certain kind of weed invades our yard in autumn, we may conclude that next autumn our yard will again be invaded by the weed; or having tested a large sample of coffee makers, only to find that each of them has a faulty fuse, we conclude that all the coffee makers in the batch are defective. In these cases we infer, or reach a conclusion based on observations. The observations or assumptions on which we base the inference - the annual appearance of the weed, or the sample of coffee makers with faulty fuses - form the premises of the inference.
In an inductive inference, the premises provide evidence or support for the conclusion; this support can vary in strength. The argument’s strength depends on how likely it is that the conclusion will be true, assuming all of the premises to be true. If assuming the premises to be true makes it highly probable that the conclusion also would be true, the argument is inductively strong. If, however, the supposition that all the premises are true only slightly increases the probability that the conclusion will be true, the argument is inductively weak.
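Inductive strength is naturally glossed in terms of conditional probability (a standard modern gloss, not notation used in this discussion): for premises \(P_1, \ldots, P_n\) and conclusion \(C\), the argument is inductively strong to the degree that

\[
\Pr(C \mid P_1 \wedge \cdots \wedge P_n)
\]

is high; deductive validity is then the limiting case in which this probability equals 1.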
The truth or falsity of the premises or the conclusion is not at issue. The degree of strength instead depends on whether, and how much, the likelihood of the conclusion’s being true would increase if the premises were true. So, in induction, as in deduction, the emphasis is on the form of support that the premises provide to the conclusion. However, induction differs from deduction in a crucial respect. In deduction, for an argument to be correct, if the premises were true, the conclusion would have to be true as well. In induction, however, even when an argument is inductively strong, the possibility remains that the premises are true and the conclusion false. To return to our examples: even though it is true that this weed has invaded our yard every year, it remains possible that the weed could die and never reappear; likewise, it is true that all of the coffee makers tested had faulty fuses, but it is possible that the remainder of the coffee makers in the batch are not defective. Yet it is still correct, from an inductive point of view, to infer that the weed will return, and that the remaining coffee makers have faulty fuses.
Thus, strictly speaking, all inductive inferences are deductively invalid. Yet induction is not worthless; in both everyday reasoning and scientific reasoning regarding matters of fact - for instance, in trying to establish general empirical laws - induction plays a central role. In an inductive inference, for example, we draw conclusions about an entire group of things, or a population, on the basis of data about a sample of that group or population; or we predict the occurrence of a future event on the basis of observations of similar past events; or we attribute a property to a non-observed thing on the grounds that all observed things of the same kind have that property; or we draw conclusions about the causes of an illness based on observations of symptoms. Inductive inference is used in almost all fields, including education, psychology, physics, chemistry, biology, and sociology. Because the role of induction is so central in our processes of reasoning, the study of inductive inference is one of the major areas of concern in efforts to create computer models of human reasoning in Artificial Intelligence.
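As a toy illustration of how such inference can be modelled computationally - a minimal sketch using one classical proposal, Laplace’s rule of succession, with made-up numbers; it is not presented here as the method of any author cited in this discussion:

    # Laplace's rule of succession: given n observations, k of which had the
    # property of interest, estimate the probability that the next observation
    # has it as (k + 1) / (n + 2) (assuming a uniform prior over the unknown
    # frequency).
    def rule_of_succession(k: int, n: int) -> float:
        return (k + 1) / (n + 2)

    # The weed has appeared in 20 of 20 autumns: a strong inductive inference.
    print(rule_of_succession(20, 20))  # ~0.955
    # Only 2 of 10 sampled coffee makers had faulty fuses: a weak one.
    print(rule_of_succession(2, 10))   # ~0.25

Note that even the strong case never reaches probability 1, which is exactly the sense in which inductive inference remains deductively invalid.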
The development of inductive logic owes a great deal to the 19th-century British philosopher John Stuart Mill, who studied different methods of reasoning and experimental inquiry in his work ‘A System of Logic’ (1843). Mill was chiefly interested in studying and classifying the different types of reasoning in which we start with observations of events and go on to infer the causes of those events. In ‘A Treatise on Induction and Probability’ (1960), the 20th-century Finnish philosopher Georg Henrik von Wright expounded the theoretical foundations of Mill’s methods of inquiry.
Philosophers have struggled with the question of what justification we have to take for granted induction’s common assumptions: that the future will follow the same patterns as the past; that a whole population will behave roughly like a randomly chosen sample; that the laws of nature governing causes and effects are uniform; or that we can presume that a sufficiently large number of observed objects give us grounds to attribute something to another object we have not yet observed. In short, what is the justification for induction itself? This question of justification, known as the problem of induction, was first raised by 18th-century Scottish philosopher David Hume in his An Enquiry Concerning Human Understanding (1748). While it is tempting to try to justify induction by pointing out that inductive reasoning is commonly used in both everyday life and science, and its conclusions are, by and large, proven to be correct, this justification is itself an induction and therefore it raises the same problem: Nothing guarantees that simply because induction has worked in the past it will continue to work in the future. The problem of induction raises important questions for the philosopher and logician whose concern it is to provide a basis of assessment of the correctness and the value of methods of reasoning.
In the eighteenth century, Locke’s empiricism and the science of Newton were, with reason, combined in people’s eyes to provide a paradigm of rational inquiry that, arguably, has never been entirely displaced. It emphasized the very limited scope of absolute certainties in the natural and social sciences, and more generally underlined the boundaries to certain knowledge that arise from our limited capacities for observation and reasoning. To that extent it provided an important foil to the exaggerated claims sometimes made for the natural sciences in the wake of Newton’s achievements in mathematical physics.
This appears to conflict strongly with Thomas Kuhn’s (1922-96) statement that scientific theory choice depends on considerations that go beyond observation and logic, even when logic is construed to include confirmation.
Nonetheless, it can be said that the state of science at any given time is characterized, in part, by the theories accepted then. Presently accepted theories include quantum theory, the general theory of relativity, and the modern synthesis of Darwin and Mendel, as well as lower-level but still clearly theoretical assertions such as that DNA has a double-helical structure, that the hydrogen atom contains a single electron, and so forth. What precisely is involved in accepting a theory, and what factors govern theory choice?
Many critics have been scornful of the philosophical preoccupation with under-determination - the thesis that a theory is supported by evidence only through the observation categoricals it implies, and yet is never uniquely fixed by them. The French physicist Pierre Duhem, who is remembered philosophically for his La Théorie physique (1906), translated as ‘The Aim and Structure of Physical Theory’, held that a theory is a device for calculation: science provides a deductive system that is systematic, economical and predictive. Following Duhem, Willard Van Orman Quine (1908-2000) points out that observation categoricals can seldom if ever be deduced from a single scientific theory taken by itself: rather, the theory must be taken in conjunction with a whole lot of other hypotheses and background knowledge, which are usually not articulated in detail and may sometimes be quite difficult to specify. A theoretical sentence does not, in general, have any empirical content of its own. This doctrine is called ‘holism’, a term that refers to a variety of positions that have in common a resistance to understanding large unities as merely the sum of their parts, and an insistence that we cannot explain or understand the parts without treating them as belonging to such larger wholes. Some of these issues concern explanation: it is argued, for example, that facts about social classes are not reducible to facts about the beliefs and actions of the agents who belong to them, or it is claimed that we can only understand the actions of individuals by locating them in social roles or systems of social meanings.
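The logical point behind this holism can be put schematically (a standard reconstruction, not Duhem’s or Quine’s own notation): a theory \(T\) alone does not entail an observational consequence \(O\); only the conjunction of \(T\) with auxiliary hypotheses \(A_1, \ldots, A_n\) does:

\[
T \nvdash O, \qquad (T \wedge A_1 \wedge \cdots \wedge A_n) \vdash O.
\]

Hence an unfavourable observation \(\neg O\) yields only

\[
\neg (T \wedge A_1 \wedge \cdots \wedge A_n),
\]

so the blame may be placed on the theory or on any of the auxiliaries - which is why a single theoretical sentence has no empirical content of its own.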
But, whatever may be the case with under-determination, there is a very closely related problem that scientists certainly do face whenever two rival theories or more encompassing theoretical frameworks are competing for acceptance. This is the problem posed by the fact that one framework, usually the older, longer-established one, can accommodate - that is, produce post hoc explanations of - particular pieces of evidence that seem intuitively to tell strongly in favour of the other (usually the new, ‘revolutionary’) framework.
For example, the Newtonian particulate theory of light is often thought of as having been straightforwardly refuted by the outcome of experiments - like Young’s two-slit experiment - whose results were correctly predicted by the rival wave theory. Duhem’s (1906) analysis of theories and theory testing already shows that this cannot logically have been the case. The bare theory that light consists of some sort of material particle has no empirical consequences in isolation from other assumptions, and it follows that there must always be assumptions that could be added to the bare corpuscular theory such that the combined assumptions entail the correct result of any optical experiment. And indeed, a little historical research soon reveals eighteenth- and early nineteenth-century emissionists who suggested at least outline ways in which interference results could be accommodated within the corpuscular framework. Brewster, for example, suggested that interference might be a physiological phenomenon, while Biot and others worked on the idea that the so-called interference fringes are produced by the peculiarities of the ‘diffracting forces’ that ordinary gross matter exerts on the light corpuscles.
Both suggestions ran into major conceptual problems. For example, the ‘diffracting force’ suggestion would not even come close to working with forces of any kinds that were taken to operate in other cases. Often the failure was qualitative: given the properties of forces that were already known about, for example, it was expected that the diffracting force would depend in some way on the material properties of the diffracting object; but whatever the material of the double-slit screen in Young’s experiment, and whatever its density, the outcome is the same. It could, of course, simply be assumed that the diffracting forces are of an entirely novel kind, and that their properties just had to be ‘read off’ the phenomena - this is exactly the way that the corpuscularists worked. But given that this was simply a question of attempting to write the phenomena into a favoured conceptual framework, and given that the writing-in produced complexities and incongruities for which there was no independent evidence, the majority view was that the interference results strongly favour the wave theory, of which they are ‘natural’ consequences. (For example, that the material making up the double slit and its density have no effect at all on the phenomenon is a straightforward consequence of the fact that, as the wave theory has it, the only effect of the screen is to absorb those parts of the wave fronts that impinge on it.)
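For reference, the wave account yields the familiar quantitative prediction (standard textbook form; the formula is not given in the discussion above): for slit separation \(d\), wavelength \(\lambda\) and slit-to-screen distance \(L \gg d\), bright fringes occur where

\[
d \sin\theta_m = m\lambda, \qquad y_m \approx \frac{m \lambda L}{d}, \quad m = 0, \pm 1, \pm 2, \ldots,
\]

giving a fringe spacing \(\Delta y = \lambda L / d\). No parameter here refers to the material or density of the screen - precisely the independence noted above.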
The natural methodological judgement (and the one that seems to have been made by the majority of competent scientists at the time) is that, even given that the interference effects could be accommodated within the corpuscular theory, those effects nonetheless favour the wave account, and favour it in the epistemic sense of showing that theory to be more likely to be true. Of course, the account given by the wave theory of the interference phenomena is also, in certain senses, pragmatically simpler; but this seems generally to have been taken to be, not a virtue in itself, but a reflection of a deeper virtue connected with likely truth.
Consider a second, similar case: that of evolutionary theory and the fossil record. There are well-known disputes about which particular evolutionary account draws most support from the fossils. Nonetheless, the relative weight the fossil evidence carries for some sort of evolutionary account, versus the special-creationist theory, is instructive precisely because the theory of special creation can accommodate fossils: a creationist just needs to claim that what the evolutionist thinks of as the bones of animals belonging to extinct species are, in fact, simply items that God chose to include in his catalogue of the universe’s contents at creation, and similarly for what the evolutionist thinks of as imprints in the rocks of the skeletons of other such animals. It nonetheless surely still seems true intuitively that the fossil record gives us better reason to believe that species have evolved from earlier, now extinct ones, than that God created the universe much as it presently is in 4004 BC. An empiricist-instrumentalist approach seems committed to the view that, on the contrary, any preference that this evidence yields for the evolutionary account is a purely pragmatic matter.
Of course, intuitions, no matter how strong, cannot stand against strong counter-arguments. Van Fraassen and other strong empiricists have produced arguments that purport to show that these intuitions are indeed misguided.
What justifies the acceptance of a theory? Although particular versions of empiricism have met many criticisms, it still seems that theory acceptance must be justified in broadly empiricist terms: in terms, that is, of support by the available evidence. How else could the objectivity of science be defended except by showing that its conclusions - and in particular its theoretical conclusions, the theories it presently accepts - are somehow legitimately based on agreed observational and experimental evidence? Yet, as is well known, theories in general pose a problem for empiricism.
Allow the empiricist the assumption that there are observational statements whose truth-values can be intersubjectively agreed. A definitive formulation of the classical view was finally provided by the German logical positivist Rudolf Carnap (1891-1970), who combined a basic empiricism with the logical tools provided by Frege and Russell; and it is in his work that the main achievements (and difficulties) of logical positivism are best exhibited. His first major work was Der logische Aufbau der Welt (1928, translated as ‘The Logical Structure of the World’, 1967). This phenomenalistic work attempts a reduction of all the objects of knowledge by generating equivalence classes of sensations, related by a primitive relation of remembrance of similarity. This is the solipsistic basis of the construction of the external world, although Carnap later resisted the apparent metaphysical priority thus given to experience. His hostility to metaphysics soon developed into the characteristic positivist view that metaphysical questions are pseudo-problems. Criticism from the Austrian philosopher and social theorist Otto Neurath (1882-1945) shifted Carnap’s interest toward a view of the unity of the sciences, with the concepts and theses of the special sciences translatable into a basic physical vocabulary whose protocol statements describe not experience but the qualities of points in space-time. Carnap pursued the enterprise of clarifying the structures of mathematical and scientific language (in his view the only legitimate task for scientific philosophy) in Logische Syntax der Sprache (1934, translated as ‘The Logical Syntax of Language’, 1937). Refinements to his syntactic and semantic views continued with Meaning and Necessity (1947), while a general loosening of the original ideal of reduction culminated in the great Logical Foundations of Probability, the most important single work of confirmation theory, in 1950. Other works concern the structure of physics and the concept of entropy.
On this approach, the observational terms were presumed to be given a complete empirical interpretation, which left the theoretical terms with only an ‘indirect’ empirical interpretation, provided by their implicit definition within an axiom system in which some of the terms possessed a complete empirical interpretation.
Among the issues generated by Carnap’s formulation was the viability of the theory-observation distinction. Of course, one could always arbitrarily designate some subset of non-logical terms as belonging to the observational vocabulary; however, that would compromise the relevance of the philosophical analysis for any understanding of the original scientific theory. But what could be the philosophical basis for drawing the distinction? Take the predicate ‘spherical’, for example. Anyone can observe that a billiard ball is spherical, but what about the moon, or an invisible speck of sand? Is the application of the term ‘spherical’ to these objects ‘observational’?
Another problem was more formal: Craig’s theorem seemed to show that a theory reconstructed in the recommended fashion could be re-axiomatized in such a way as to dispense with all theoretical terms, while retaining all logical consequences involving only observational terms. Craig’s theorem is a result in mathematical logic held to have implications in the philosophy of science. The logician William Craig showed that if we partition the vocabulary of a formal system (say, into the ‘T’ or theoretical terms and the ‘O’ or observational terms), then, given a fully formalized theory ‘T’ with some set ‘S’ of consequences containing only the ‘O’ terms, there is also a system containing only the ‘O’ vocabulary but strong enough to give the same set ‘S’ of consequences. The theorem is a purely formal one, in that ‘T’ and ‘O’ simply separate formulae into those containing non-logical terms of only one kind of vocabulary and the rest. The theorem might encourage the thought that the theoretical terms of a scientific theory are, in principle, dispensable, since the same observational consequences can be derived without them.
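Schematically, in a standard modern statement of the result (not the wording used above): if \(T\) is a recursively axiomatized theory and

\[
S = \{ \varphi : T \vdash \varphi \text{ and } \varphi \text{ contains only } O\text{-vocabulary} \},
\]

then Craig’s theorem guarantees a recursively axiomatized theory \(T_O\), formulated in the \(O\)-vocabulary alone, whose theorems are exactly \(S\).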
However, Craig’s actual procedure gives no effective way of dispensing with theoretical terms in advance, i.e., in the actual process of thinking about and designing the premises from which the set ‘S’ follows; in this sense the ‘O’ system remains parasitic upon its parent ‘T’.
Thus, as far as the ‘empirical’ content of a theory is concerned, it seems that we can do without the theoretical terms, and Carnap’s version of the classical view seemed to imply a form of instrumentalism - a problem which the German philosopher of science Carl Gustav Hempel (1905-97) christened ‘the theoretician’s dilemma’.
The great metaphysical debate over the nature of space and time has its roots in the scientific revolution of the sixteenth and seventeenth centuries. An early contributor to the debate was the French mathematician and founding father of modern philosophy, René Descartes (1596-1650). His interest in the methodology of a unified science culminated in his first work, the Regulae ad Directionem Ingenii (1628/9), which was never completed. Nonetheless, between 1628 and 1649 Descartes first wrote, and then cautiously suppressed, Le Monde (1634), and in 1637 produced the Discours de la Méthode as a preface to the treatise on mathematics and physics in which he introduced the notion of Cartesian coordinates.
His best-known philosophical work, the Meditationes de Prima Philosophia (Meditations on First Philosophy), together with objections by distinguished contemporaries and replies by Descartes (the Objections and Replies), appeared in 1641. The authors of the objections were: first set, the Dutch theologian Johan de Kater; second set, Mersenne; third set, Hobbes; fourth set, Arnauld; fifth set, Gassendi; and sixth set, Mersenne. The second edition (1642) of the Meditations included a seventh set by the Jesuit Pierre Bourdin. Descartes’s penultimate work, the Principia Philosophiae (Principles of Philosophy) of 1644, was designed partly for use as a university textbook. His last work, Les Passions de l’âme (The Passions of the Soul), was published in 1649. In that year Descartes visited the court of Kristina of Sweden, where he contracted pneumonia, allegedly through being required to break his normal habit of late rising in order to give lessons at 5:00 a.m. His last words are supposed to have been ‘Ça, mon âme, il faut partir’ - ‘So, my soul, it is time to part’.
Descartes’s early contribution to the debate over space and time was his identification of matter with extension, and his concomitant theory of all of space as filled by a plenum of matter.
Far more profound was the contribution of the German philosopher, mathematician and polymath Gottfried Wilhelm Leibniz (1646-1716), who characterized a full-blooded relationist theory of space and time. As Leibniz elegantly puts his view: ‘Space is nothing but the order of coexistence . . . time is the order of inconsistent possibilities’. Space was taken to be a set of relations among material objects. Setting the deeper monadological view to one side, material objects were the substantival entities; no room was provided for space itself as a substance over and above the material substance of the world. All motion was then merely the relative motion of one material thing in the reference frame fixed by another. The Leibnizian theory was one of great subtlety. In particular, the need for a modalized relationism to allow for ‘empty space’ was clearly recognized: an unoccupied spatial location was taken to be a spatial relation that could be realized but was not realized in actuality. Leibniz also offered trenchant arguments against substantivalism. All of these rested upon some variant of the claim that a substantival picture of space allows for the theoretical toleration of alternative world models that are identical as far as any observable consequences are concerned.
Contending with Leibnizian relationism was the ‘substantivalism’ of Isaac Newton (1642-1727) and his disciple Samuel Clarke, who is mainly remembered for his defence of Newton (a friend from Cambridge days) against Leibniz, both on the question of the existence of absolute space and on the propriety of appealing to a force of gravity. Actually, Newton was cautious about thinking of space as a ‘substance’. Sometimes he suggested that it be thought of, rather, as a property - in particular, as a property of the Deity. However, what was essential to his doctrine was his denial that a relationist theory, with its idea of motion as the relative change of position of one material object with respect to another, can do justice to the facts about motion made evident by empirical science and by the theory that does justice to those facts.
The Newtonian account of motion, like Aristotle’s, has a concept of natural or unforced motion. This is motion with uniform speed in a constant direction, so-called inertial motion. There is, then, in this theory an absolute notion of constant velocity motion. Such constant velocity motions cannot be characterized merely as motions relative to some material objects, since reference frames fixed by material objects may themselves be non-inertial. Space itself, according to Newton, must exist as an entity over and above the material objects of the world, in order to provide the standard of rest relative to which uniform motion is genuinely inertial motion.
Such absolute uniform motions can be empirically discriminated from absolutely accelerated motions by the absence of the inertial forces felt when the test object is moving genuinely inertially. Furthermore, the application of force to an object is correlated with the object’s change of absolute motion. Only uniform motions relative to space itself are natural motions, requiring no force and no explanation. Newton also clearly saw that the notion of absolute constant speed requires a notion of absolute time, for, relative to an arbitrary cyclic process as defining the time scale, any motion can be made uniform or not, as we choose. Genuinely uniform motions, however, are of constant speed in the absolute time scale fixed by ‘time itself’. Periodic processes can at best be good indicators of measures of this flow of absolute time.
Newton’s refutation of relationism by means of the argument from absolute acceleration is one of the most distinctive examples of the way in which the results of empirical experiment, and of the theoretical efforts to explain those results, impinge on philosophy. While there are philosophical objections to Leibnizian relationism - for example, the claim that one must posit a substantival space to make sense of Leibniz’s modalities of possible position - it is the scientific objection to relationism that causes the greatest problems for that philosophical doctrine.
Then again, a number of scientists and philosophers continued to defend the relationist account of space in the face of Newton’s arguments for substantivalism. Among them were Gottfried Wilhelm Leibniz, Christiaan Huygens, and George Berkeley, who in 1721 published De Motu (‘On Motion’) attacking Newton’s philosophy of space, a topic to which he returned much later in The Analyst of 1734. The empirical facts about absolute acceleration continued, however, to frustrate their efforts.
In the nineteenth century, the Austrian physicist and philosopher Ernst Mach (1838-1916) made the audacious proposal that absolute acceleration might be viewed as acceleration relative not to a substantival space, but to the material reference frame of what he called the ‘fixed stars’ - that is, relative to a reference frame fixed by what might now be called the ‘average smeared-out mass of the universe’. As far as observational data went, he argued, the fixed stars could be taken to fix the frames relative to which uniform motion was absolutely uniform. Mach’s suggestion continues to play an important role in debates up to the present day.
The nature of geometry as an apparently a priori science also continued to receive attention. Geometry served as the paradigm of knowledge for rationalist philosophers, especially for Descartes and the Dutch Jewish rationalist Benedictus de Spinoza (1632-77). The German philosopher Immanuel Kant’s (1724-1804) attempt to account for the ability of geometry to go beyond the analytic truths of logic extended by definition was especially important. His explanation of the a priori nature of geometry by its ‘transcendentally psychological’ nature - that is, as descriptive of a portion of the mind’s organizing structure imposed on the world of experience - served as his paradigm for legitimate a priori knowledge in general.
A peculiarity of Newton’s theory, of which Newton was well aware, was that whereas acceleration with respect to space itself had empirical consequences, uniform velocity with respect to space itself had none. The theory of light, particularly in J.C. Maxwell’s theory of electromagnetic waves, suggested, however, that there was only one reference frame in which the velocity of light would be the same in all directions, and that this might be taken to be the frame at rest in ‘space itself’. Experiments designed to find this frame seemed to show, however, that light velocity is isotropic and has its standard value in all frames that are in uniform motion in the Newtonian sense. All these experiments, however, measured only the average velocity of the light relative to the reference frame over a round-trip path.
It was the insight of the German physicist Albert Einstein (1879-1955) to take the apparent equivalence of all inertial frames with respect to the velocity of light to be a genuine equivalence. It was from his employment in the Patent Office in Bern that in 1905 he published the papers that laid the foundation of his reputation, among them the paper founding the special theory of relativity. In 1916 he published the general theory, and in 1933 Einstein accepted the position at the Princeton Institute for Advanced Study which he occupied for the rest of his life. His deepest insight was to see that this equivalence required that we relativize the simultaneity of spatially separated events to a chosen inertial reference frame. Just as the spatial distance between non-simultaneous events is frame-relative, simultaneity itself is relative as well. This theory of Einstein’s later became known as the Special Theory of Relativity.
Einstein’s proposal accounts for the empirical undetectability of the absolute rest frame by optical experiments, because in his account the velocity of light is isotropic and has its standard value in all inertial frames. The theory had immediate kinematic consequences, among them the fact that spatial separations (lengths) and temporal intervals are relative to the frame of motion. A new dynamics was needed if dynamics were to be, as it was for Newton, equivalent in all inertial frames.
Einstein’s novel understanding of space and time was given an elegant framework by H. Minkowski in the form of Minkowski space-time. The primitive elements of the theory were point-like locations in both space and time of unextended happenings. These were called the ‘event locations’ or the ‘events’ of a four-dimensional manifold. There is a frame-invariant separation of event from event, called the ‘interval’. But the spatial separation between two noncoincident events, as well as their temporal separation, are well defined only relative to a chosen inertial reference frame. In a sense, then, space and time are integrated into a single absolute structure; space and time by themselves have only a derivative and relativized existence.
Whereas the geometry of this space-time bore some analogies to a Euclidean geometry of a four-dimensional space, the transition from space and time by themselves to an integrated space-time required a subtle rethinking of the very subject matter of geometry. ‘Straight lines’ are the straightest curves of this ‘flat’ space-time, but they include ‘null straight lines’, interpreted as the events in the life history of a light ray in a vacuum, and ‘time-like straight lines’, interpreted as the collection of events in the life history of a free inertial particle. Einstein’s next great contribution to the revolution in scientific thinking was to incorporate gravity into the new relativistic framework. The result of his thinking was the theory known as the general theory of relativity.
The heuristic basis for the theory rested on an empirical fact known to Galileo and Newton, but whose importance was made clear only by Einstein. Gravity, unlike other forces such as the electromagnetic force, acts on all objects independently of their material constitution or of their size. The path through space-time followed by an object under the influence of gravity is determined only by its initial position and velocity. Reflection upon the fact that in a curved space the path of minimal curvature from a point, the so-called ‘geodesic’, is uniquely determined by the point and by a direction from it, suggested to Einstein that the path of an object acted upon by gravity can be thought of as a geodesic in a curved space-time. The addition of gravity to the space-time of special relativity can be thought of as changing the ‘flat’ space-time of Minkowski into a new, ‘curved’ space-time.
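In the standard modern notation (supplied here for reference; it is not part of the original discussion), the geodesic condition is written

\[ \frac{d^2 x^\mu}{d\tau^2} + \Gamma^\mu_{\alpha\beta}\,\frac{dx^\alpha}{d\tau}\,\frac{dx^\beta}{d\tau} = 0, \]

where τ is proper time along the path and the coefficients Γ^μ_αβ encode the curvature-related structure of the space-time. In flat Minkowski space-time, in inertial coordinates, these coefficients vanish and the condition reduces to unaccelerated straight-line motion.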
The kind of curvature implied by the theory is that explored by B. Riemann in his theory of intrinsically curved spaces of arbitrary dimension. No assumption is made that the curved space exists in some higher-dimensional flat embedding space; curvature is an intrinsic feature of the space, one that shows up observationally in the behaviour of the space’s straightest lines, just as the shortest distances between points on the Earth’s surface cannot be reconciled with putting those points on a flat surface. Einstein (and others) offered further heuristic arguments to suggest that gravity might indeed have an effect on relativistic interval separations as determined by measurements using tapes (to determine spatial separations) and clocks (to determine time intervals).
The special theory gives a unified account of the laws of mechanics and of electromagnetism (including optics). Before 1905 the purely relative nature of uniform motion had in part been recognized in mechanics, although Newton had considered time to be absolute and had also postulated absolute space. In electromagnetism the ‘ether’ was supposed to provide an absolute basis with respect to which motion could be determined. Einstein rejected the ether and made two postulates: (1) The laws of nature are the same for all observers in uniform relative motion. (2) The speed of light is the same for all such observers, independently of the relative motion of sources and detectors. He showed that these postulates were equivalent to the requirement that the coordinates of space and time used by different observers should be related by the ‘Lorentz transformation equations’. The theory has several important consequences.
That is to say, the Lorentz transformation is a set of equations for transforming the position and motion parameters from an observer at O(x, y, z) to an observer at O’(x’, y’, z’) moving relative to the first. The equations replace the Galilean transformation equations of Newtonian mechanics in relativistic problems. If the x-axes are chosen to pass through OO’, and the time of an event is t and t’ in the frames of reference of the observers at O and O’ respectively (where the zeros of their time scales were the instants at which O and O’ coincided), the equations are:
x’ = β(x - vt)
y’ = y
z’ = z
t’ = β(t - vx/c²),
where v is the relative velocity of separation of O and O’, c is the speed of light, and β is the factor 1/(1 - v²/c²)½.
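As a concrete illustration, here is a minimal numeric sketch (not part of the original text; the function name and sample values are invented for the example) showing the transformation and the frame-invariance of the ‘interval’ (ct)² - x² mentioned above:

import math

def lorentz(x, t, v, c=299_792_458.0):
    """Map event coordinates (x, t) into the frame moving at speed v along x."""
    beta = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # the factor called beta above
    return beta * (x - v * t), beta * (t - v * x / c ** 2)

c = 299_792_458.0
x, t = 1.0e9, 2.0                # an event: metres, seconds (sample values)
xp, tp = lorentz(x, t, 0.6 * c)

# The interval (ct)^2 - x^2 comes out the same in both frames, up to rounding.
print((c * t) ** 2 - x ** 2)
print((c * tp) ** 2 - xp ** 2)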
The transformation of time implies that two events that are simultaneous according to one observer will not necessarily be so according to another in uniform relative motion. This does not, however, violate any concepts of causation. It will appear to two observers in uniform relative motion that each other’s clock runs slowly. This is the phenomenon of ‘time dilation’; for example, an observer moving with respect to a radioactive source finds a longer decay time than that found by an observer at rest with respect to it, according to:
Tv = T0/(1 - v²/c²)½,
where Tv is the mean life measured by an observer at relative speed v, T0 is the mean life measured by an observer relatively at rest, and c is the speed of light.
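A worked instance (a sketch added for illustration; the 2.2 μs figure is the standard rest-frame mean life of the muon):

import math

def dilated_mean_life(T0, v, c=299_792_458.0):
    """Mean life T_v measured by an observer at relative speed v."""
    return T0 / math.sqrt(1.0 - (v / c) ** 2)

c = 299_792_458.0
T0 = 2.2e-6                              # muon rest-frame mean life, seconds
print(dilated_mean_life(T0, 0.98 * c))   # about 1.1e-5 s: some five times longer

This is essentially the observation made on cosmic-ray muons, whose survival to ground level is standardly cited as a direct confirmation of time dilation.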
Among the results of relativistic optics is the deduction of the exact form of the Doppler effect. In relativistic mechanics, mass, momentum and energy are all conserved. An observer with speed v with respect to a particle determines its mass to be m, while an observer at rest with respect to the particle measures the ‘rest mass’ m0, such that:
m = m0/(1 - v²/c²)½
This formula has been verified in innumerable experiments. One consequence is that no body can be accelerated from a speed below c with respect to any observer to one above c, since this would require infinite energy. Einstein deduced that the transfer of energy δE by any process entailed the transfer of mass δm, where δE = δmc², hence he concluded that the total energy E of any system of mass m would be given by:
E = mc²
The kinetic energy of a particle as determined by an observer with relative speed v is thus (m - m0)c², which tends to the classical value ½mv² if v ≪ c.
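That limiting behaviour is easy to exhibit numerically (a minimal sketch of the formulas above; the chosen speeds are arbitrary sample values):

import math

def kinetic_energy(m0, v, c=299_792_458.0):
    """Relativistic kinetic energy (m - m0)c^2 for rest mass m0 at speed v."""
    m = m0 / math.sqrt(1.0 - (v / c) ** 2)
    return (m - m0) * c ** 2

c = 299_792_458.0
m0 = 1.0
for v in (3.0e3, 3.0e6, 0.9 * c):
    print(v, kinetic_energy(m0, v), 0.5 * m0 * v ** 2)
# At the low speeds the two energy columns agree closely;
# near c the relativistic value grows far beyond the classical one.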
Attempts to express quantum theory in terms consistent with the requirements of relativity were begun by Sommerfeld (1915). Eventually Dirac (1928) gave a relativistic formulation of the wave mechanics of conserved particles (fermions). This explained the concept of spin and the associated magnetic moment, and accounted for certain details of spectra. The theory led to important results in the physics of elementary particles, the theory of beta decay, and quantum statistics. The Klein-Gordon equation is the relativistic wave equation for ‘bosons’.
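For reference (a standard textbook form, supplied here rather than taken from the text), the Klein-Gordon equation for a free particle of mass m may be written

\[ \frac{1}{c^2}\frac{\partial^2 \phi}{\partial t^2} - \nabla^2 \phi + \frac{m^2 c^2}{\hbar^2}\,\phi = 0, \]

which reduces to the ordinary wave equation when m = 0.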
A mathematical formulation of the special theory of relativity was given by Minkowski. It is based on the idea that an event is specified by four coordinates: three spatial coordinates and one of time. These coordinates define a four-dimensional space, and the motion of a particle can be described by a curve in this space, which is called ‘Minkowski space-time’.
The special theory of relativity is concerned with relative motion between non-accelerated frames of reference. The general theory deals with general relative motion between accelerated frames of reference. In accelerated systems of reference, certain fictitious forces are observed, such as the centrifugal and Coriolis forces found in rotating systems. These are known as fictitious forces because they disappear when the observer transforms to a non-accelerated system. For example, to an observer in a car rounding a bend at constant velocity, objects in the car appear to suffer a force acting outwards. To an observer outside the car, this is simply their tendency to continue moving in a straight line. The inertia of the objects is seen to cause the fictitious force, and the observer can thereby distinguish between non-inertial (accelerated) and inertial (non-accelerated) frames of reference.
A further point is that, to the observer in the car, all the objects are given the same acceleration irrespective of their mass. This implies a connection between the fictitious forces arising from accelerated systems and forces due to gravity, where the acceleration produced is likewise independent of the mass. For example, a person in a sealed container could not easily determine whether he was being driven toward the floor by gravity or whether the container was in space and being accelerated upwards by a rocket. Observations within the container cannot decide between these alternatives; they are indistinguishable, from which it follows that the inertial mass is the same as the gravitational mass.
The equivalence between a gravitational field and the fictitious forces in non-inertial systems can be expressed by using ‘Riemannian space-time’, which differs from the Minkowski space-time of the special theory. In special relativity the motion of a particle that is not acted on by any forces is represented by a straight line in Minkowski space-time. In general relativity, using Riemannian space-time, the motion is represented by a line that is no longer straight (in the Euclidean sense) but is the line giving the shortest distance. Such a line is called a ‘geodesic’. Thus, space-time is said to be curved. The extent of this curvature is given by the ‘metric tensor’ for space-time, the components of which are solutions to Einstein’s ‘field equations’. The fact that gravitational effects occur near masses is introduced by the postulate that the presence of matter produces this curvature of space-time. This curvature of space-time controls the natural motions of bodies.
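The field equations referred to here are standardly written (a textbook form, added for reference) as

\[ R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu}, \]

where g_μν is the metric tensor, T_μν represents the distribution of matter and energy, and R_μν and R are curvature quantities constructed from the metric. The equations thus make quantitative the postulate just stated: the presence of matter produces the curvature of space-time.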
The predictions of general relativity differ from those of Newton’s theory only by small amounts, and most tests of the theory have been carried out through observations in astronomy. For example, it explains the shift in the perihelion of Mercury, the bending of light in the presence of large bodies, and the Einstein shift. Very close agreement between the predicted and the accurately measured values has now been obtained.
So, then, using the new space-time notions, a ‘curved space-time’ theory of Newtonian gravitation can be constructed. In this theory, as in Newton’s, time is absolute, and space remains flat Euclidean space. This is unlike the general theory of relativity, where the space-time curvature can induce spatial curvature as well. But the space-time curvature of this ‘curved neo-Newtonian space-time’ shows up in the fact that particles under the influence of gravity do not follow straight-line paths. Their paths become, as in general relativity, the curved time-like geodesics of the space-time. In this curved space-time account of Newtonian gravity, as in the general theory of relativity, the indistinguishable alternative worlds of theories that take gravity to be a force superimposed on a flat space-time collapse to a single world model.
The strongest impetus to rethink epistemological issues in the theory of space and time came from the introduction of curvature and of non-Euclidean geometries in the general theory of relativity. The claim that a unique geometry could be known to hold true of the world a priori seemed unviable, at least in its naive form, in a situation where our best available physical theory allowed for a wide diversity of possible geometries for the world and in which the geometry of space-time was one more dynamical element joining the other ‘variable’ features of the world. Of course, skepticism toward an a priori account of geometry could already have been induced by the change from space and time to space-time in the special theory, even though the space of that world remained Euclidean.
The natural response to these changes in physics was to suggest that geometry was, like all other physical theories, believable only on the basis of some kind of generalizing inference from the law-like regularities among the observational data - that is, to become an empiricist with regard to geometry.
But a defence of a kind of a priori account had already been suggested by the French mathematician and philosopher Henri Jules Poincaré (1854-1912), even before the invention of the relativistic theories. He suggested that, given the limitation of observational data to the domain of what is both material and local, the attribution of a geometry to the world as a whole is a matter of convention or decision on the part of the scientific community. If any geometric posit could be made compatible with any set of observational data, Euclidean geometry could remain a priori in the sense that we could, conventionally, decide to hold to it as the geometry of the world in the face of any data that apparently refuted it.
The central epistemological issue in the philosophy of space and time remains that of theoretical under-determination, stemming from the Poincaré argument. In the case of the special theory of relativity the question is the rational basis for choosing Einstein’s theory over, for example, one of the ‘aether reference frame plus modification of rods and clocks when they are in motion with respect to the aether’ theories that it displaced. Among the claims alleged to be true merely by convention in the theory are those asserting the simultaneity of distant events and those asserting the ‘flatness’ of the chosen space-time. Crucial here is the fact that Einstein’s arguments themselves presuppose a strictly delimited local observation basis for the theories, and that in fixing on the special theory of relativity one must make posits about the space-time structure that outrun the facts given strictly by observation. In the case of the general theory of relativity, the issue becomes one of justifying the choice of general relativity over, for example, a flat space-time theory that treats gravity, as it was treated by Newton, as a ‘field of force’ over and above the space-time structure.
In both the case of special and that of general relativity, important structural features pick out the standard Einstein theories as superior to their alternatives. In particular, the standard relativistic models eliminate some of the problems of observationally equivalent but theoretically distinguishable worlds countenanced by the alternative theories. However, the epistemologist must still be concerned with the question of why these features constitute grounds for accepting the theories as the ‘true’ alternatives.
Other deep epistemological issues remain, having to do with the relationship between the structures of space and time posited in our theories of relativity and the spatiotemporal structures we use to characterize our ‘direct perceptual experience’. These issues continue, in the contemporary scientific context, the old philosophical debates on the relationship between the realm of the directly perceived and the realm of posited physical nature.
A first reaction on the part of some philosophers was to take it that the special theory of relativity provided a replacement for the Newtonian theory of absolute space that would be compatible with a relationist account of the nature of space and time. This was soon seen to be false. The absolute distinction between uniformly moving frames and frames not in uniform motion, invoked by Newton in his crucial argument against relationism, remains in the special theory of relativity. In fact, it becomes an even deeper distinction than it was in the Newtonian account, since the absolutely uniformly moving frames, the inertial frames, now become not only the frames of natural unforced motion, but also the only frames in which the velocity of light is isotropic.
At least part of the motivation behind Einstein’s development of the general theory of relativity was the hope that in this new theory all reference frames, uniformly moving or accelerated, would be ‘equivalent’ to one another physically. It was also his hope that the theory would conform to the Machian idea of absolute acceleration as merely acceleration relative to the smoothed-out matter of the universe.
Further exploration of the theory, however, showed that it had many features uncongenial to Machianism. Some of these are connected with the necessity of imposing boundary conditions for the equation connecting the matter distribution with the space-time structure. General relativity certainly allows as solutions model universes of a non-Machian sort - for example, those which are aptly described as having the smoothed-out matter of the universe itself in ‘absolute rotation’. There are strong arguments to suggest that general relativity, like Newton’s theory and like special relativity, requires the positing of a structure of ‘space-time itself’ and of motion relative to that structure, in order to account for the needed distinctions of kinds of motion in dynamics. Whereas in Newtonian theory it was ‘space itself’ that provided the absolute reference frames, in general relativity it is the structure of the null and time-like geodesics that performs this task. The compatibility of general relativity with Machian ideas is, however, a subtle matter and one still open to debate.
Other aspects of the world described by the general theory of relativity argue for a substantivalist reading of the theory as well. Space-time has become a dynamic element of the world, one that might be thought of as ‘causally interacting’ with the ordinary matter of the world. In some sense one can even attribute energy (and hence mass) to the space-time (although this is a subtle matter in the theory), making the very distinction between ‘matter’ and ‘space-time itself’ much more dubious than such a distinction would have been in the early days of the debate between substantivalists and relationists.
Nonetheless, a naive reading of general relativity as a substantivalist theory has its problems as well. One problem was noted by Einstein himself in the early days of the theory. If a region of space-time is devoid of non-gravitational mass-energy, alternative solutions to the equation of the theory connecting mass-energy with the space-time structure will agree in all regions outside the matterless ‘hole’, but will offer distinct space-time structures within it. This suggests a local version of the old Leibniz arguments against substantivalism. The argument now takes the form of a claim that a substantival reading of the theory forces it into a strong version of indeterminism, since the space-time structure outside the hole fails to fix the structure of space-time in the hole. Einstein’s own response to this problem has a very relationistic cast, taking the ‘real facts’ of the world to be intersections of paths of particles and light rays with one another, and not the structure of ‘space-time itself’. Needless to say, there are substantivalist attempts to deal with the ‘hole’ argument as well, which try to reconcile a substantival reading of the theory with determinism.
There are arguments on the part of the relationist to the effect that any substantivalist theory, even one with a distinction between absolute acceleration and mere relative acceleration, can be given a relationistic formulation. These relationistic reformulations of the standard theories lack the standard theories’ ability to explain why non-inertial motion has the features that it does. But the relationist counters by arguing that the explanation forthcoming from the substantivalist account is too ‘thin’ to have genuine explanatory value anyway.
Relationist theories are founded, as are conventionalist theses in the epistemology of space-time, on the desire to restrict ontology to that which is present in experience, this being taken to be coincidences of material events at a point. Such relationist-conventionalist accounts suffer, however, from a strong pressure to slide into full-fledged phenomenalism.
As science progresses, our posited physical space-times become more and more remote from the space-time we think of as characterizing immediate experience. This will become even more true as we move from the classical space-times of the relativity theories into fully quantized physical accounts of space-time. There is strong pressure from the growing divergence of the space-time of physics from the space-time of our ‘immediate experience’ to dissociate the two completely and, perhaps, to stop thinking of the space-time of physics as being anything like our ordinary notions of space and time. Whether such a radical dissociation of posited nature from phenomenological experience can be sustained, however, without giving up our grasp entirely on what it is to think of a physical theory ‘realistically’, is an open question.
Science aims to represent accurately the actual ontological unity and diversity of the world. The wholeness of the spatiotemporal framework and the existence of physics, i.e., of laws invariant across all the states of matter, do represent ontological unities which must be reflected in some unification of content. However, there is no simple relation between ontological and descriptive unity/diversity, and a variety of approaches to representing unity are available (the formal-substantive spectrum, the range of naturalisms). Anything complex will support many different partial descriptions; conversely, different kinds of things may all obey the laws of a unified theory, e.g., the quantum field theory of fundamental particles, or may collectively be ascribed dynamical unity, e.g., self-organizing systems.
It is reasonable to eliminate gratuitous duplication from description - that is, to apply some principle of simplicity. However, this is not necessarily the same as demanding that content satisfy some further methodological requirement of formal unification. Similarly for elucidating explanation: there is again no reason to limit the account to simple logical systematization. The unity of science might instead be complex, reflecting our multiple epistemic access to a complex reality.
Biology provides a useful analogy. The many diverse species in an ecology nonetheless each map, genetically and cognitively, interrelatable aspects of a single environment, and share exploitation of the properties of gravity, light, and so forth. Though the somatic expression is somewhat idiosyncratic to each species, and each representation incomplete, together they form an interrelatable unity, a multidimensional functional representation of their collective world. Similarly, there are many scientific disciplines, each with its distinctive domains, theories, and methods specialized to the conditions under which it accesses our world. Each discipline may exhibit growing internal metaphysical and nomological unities. On occasion, disciplines, or components thereof, may also formally unite under logical reduction. But a more substantive unity may also be manifested: though content may be somewhat idiosyncratic to each discipline, and each representation incomplete, together the disciplinary contents form an interrelatable unity, a multidimensional functional representation of their collective world. Correlatively, a key strength of scientific activity lies not in formal monolithicity, but in its forming a complex unity of diverse, interacting processes of experimentation, theorizing, instrumentation, and the like.
While this complex unity may be all that finite cognizers in a complex world can achieve, the accurate representation of a single world is still a central aim. Throughout the history of physics, significant advances are marked by the introduction of new representation (state) spaces in which different descriptions (reference frames) are embedded as interrelatable perspectives among many - thus the move from Newtonian to relativistic space-time perspectives. Analogously, young children learn to embed two-dimensional visual perspectives in a three-dimensional space in which object constancy is achieved and their own bodies are but some among many. In both cases, the process creates constant methodological pressure for greater formal unity within complex unity.
The role of unity in the intimate relation between metaphysics and method in the investigation of nature is well illustrated by the prelude to Newtonian science. In the millennial Greco-Christian religion preceding the founder of modern astronomy, Johannes Kepler (1571-1630), nature was conceived as an essentially unified mystical order, because suffused with divine reason and intelligence. The pattern of nature was not obvious, however: it was a hidden ordered unity which revealed itself to diligent search as a luminous necessity. In his Mysterium Cosmographicum, Kepler tried to construct a model of planetary motion based on the five Pythagorean regular or perfect solids. These were to be inscribed in order within the Aristotelian perfect spherical planetary orbits, and so determine them. Even the fact that space is a three-dimensional unity was a reflection of the one triune God. And when the observational facts proved too awkward for this scheme, Kepler tried instead, in his Harmonice Mundi, to build his unified model on the harmonies of the Pythagorean musical scale.
Subsequently, Kepler trod a difficult and reluctant path to the extraction of his famous three empirical laws of planetary motion: laws that made the Newtonian revolution possible, but that had none of the elegantly simple symmetries that mathematical mysticism required. Thus, we find in Kepler both the medieval methods and theories of a metaphysically unified religio-mathematical mysticism and those of modern empirical observation and model fitting. He is a transitional figure in the passage to modern science.
To appreciate both the historical tradition and the role of unity in modern scientific method, consider Newton’s methodology, focussing just on Newton’s derivation of the law of universal gravitation in Principia Mathematica, Book III. The essential steps are these: (1) The experimental work of Kepler and Galileo (1564-1642) is appealed to, so as to establish certain phenomena, principally Kepler’s laws of celestial planetary motion and Galileo’s terrestrial law of free fall. (2) Newton’s basic laws of motion are applied to the idealized system of an object small in size and mass moving with respect to a much larger mass under the action of a force whose features are purely geometrically determined. The assumed linear vector nature of the force allows construction of the centre-of-mass frame, which separates out relative from common motions: it is an inertial frame (one for which Newton’s first law of motion holds), and the construction can be extended to encompass all solar system objects.
(3) An equivalence is obtained between Kepler’s laws and the geometrical properties of the force: namely, that it is directed always along the line of centres between the masses, and that it varies inversely as the square of the distance between them. (4) Various instances of this force law are obtained for various bodies in the heavens - for example, the individual planets and the moons of Jupiter. From this one can obtain several interconnected mass ratios - in particular, several mass estimates for the Sun, which can be shown to cohere mutually. (5) The value of this force for the Moon is shown to be identical to the force required by Galileo’s law of free fall at the Earth’s surface. (6) Appeal is made again to the laws of motion (especially the third law) to argue that all satellites and falling bodies are equally themselves sources of gravitational force. (7) The force is then generalized to a universal gravitation and is shown to explain various other phenomena - for example, Galileo’s law for pendulum action - while deviations from Kepler’s laws are shown to be suitably small, thus leaving the original conclusions drawn from Kepler’s laws intact while providing explanations for the deviations.
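Step (5), the famous ‘Moon test’, can be checked with rough modern values (a numeric sketch added for illustration; the figures are approximate round numbers, not Newton’s own):

import math

g = 9.81                   # free-fall acceleration at the Earth's surface, m/s^2
R_earth = 6.37e6           # radius of the Earth, m
r_moon = 3.84e8            # radius of the Moon's orbit, m
T_moon = 27.32 * 86400.0   # sidereal period of the Moon, s

# Centripetal acceleration of the Moon in its (nearly circular) orbit.
a_moon = 4.0 * math.pi ** 2 * r_moon / T_moon ** 2

# If gravity falls off as 1/r^2, g should exceed a_moon by (r_moon / R_earth)^2.
print(g / a_moon)                # about 3.6e3
print((r_moon / R_earth) ** 2)   # also about 3.6e3: the ratios agree to ~1%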
Newton’s constructions represent a great methodological, as well as theoretical, achievement. Many other methodological components besides unity deserve study in their own right. The sense of unification here is that of a deep systematization: given the laws of motion, the geometrical form of the gravitational force and all the significant parameters needed for a complete dynamical description - that is, the constant G and the exponent of the geometrical form of gravity Gm1m2/rⁿ - are uniquely determined from phenomena; and, after the law of universal gravitation has been derived, it plus the laws of motion determine the space and time frames and a set of self-consistent attributions of mass. For example, the coherent mass attributions ground the construction of the locally inertial centre-of-mass frame, and Newton’s first law then enables us to treat time as a magnitude: equal times are those during which a freely moving body traverses equal distances. The space and time frames in turn ground the use of the laws of motion, completing the constructive circle. This construction has a profound unity to it, expressed by the multiple interdependency of its components, the convergence of its approximations, and the coherence of its multiply determined quantities. Newton’s Rule IV says (loosely): do not introduce a rival theory unless it provides an equal or superior unified construction - in particular, unless it is able to measure its parameters in terms of empirical phenomena at least as thoroughly and cross-situationally invariantly (Rule III) as does the current theory. This gives unity a central place in scientific method.
Kant and Whewell seized on this feature as a key reason for believing that the Newtonian account had a privileged intelligibility and necessity. Significantly, the requirement to explain deviations from Kepler’s laws through gravitational perturbations has its limits, especially in the cases of the Moon and Mercury: these deviations need explanations, the former through the complexities of n-body dynamics (which may even show chaos) and the latter through relativistic theory. Today we no longer accept the truth, let alone the necessity, of Newton’s theory. Nonetheless, it remains a standard of intelligibility. It is in this role that it functioned, not just for Kant, but also for Reichenbach, and later Einstein and even Bohr: their sense of crisis with regard to modern physics, and their efforts to reconstruct it, are best seen as stemming from their acceptance of this ideal and their recognition of its apparent falsification by quantum theory. Nonetheless, quantum theory represents a highly unified, because symmetry-preserving, dynamics, reveals universal constants, and satisfies the requirement of coherent and invariant parameter determinations.
Newtonian method provides a central, simple example of the claim that increased unification brings increased explanatory power. A good explanation increases our understanding of the world, and clearly a convincing story can do this. Nonetheless, we have also achieved great increases in our understanding of the world through unification. Newton was able to unify a wide range of phenomena by using his three laws of motion together with his universal law of gravitation. Among other things he was able to account for Johannes Kepler’s three laws of planetary motion, the tides, the motion of the comets, projectile motion and pendulums. Kepler’s laws of planetary motion, indeed, are the first mathematical, scientific laws of astronomy of the modern era. They state (1) that the planets travel in elliptical orbits, with one focus of the ellipse being the sun; (2) that the radius vector between sun and planet sweeps out equal areas in equal times; and (3) that the squares of the periods of revolution of any two planets are in the same ratio as the cubes of their mean distances from the sun.
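Law (3) can be illustrated with approximate standard data (a sketch added for illustration; semi-major axes are in astronomical units, periods in years):

planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.86),
}

# Kepler's third law: T^2 / a^3 is the same constant for every planet.
for name, (a, T) in planets.items():
    print(name, T ** 2 / a ** 3)   # each line prints approximately 1.0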
We have explanations by reference to causation, to identities, to analogies, to unification, and possibly to other factors; yet philosophically we would like to find some deeper theory that explains what it is about each of these apparently diverse forms of explanation that makes them explanatory. This we lack at the moment. Dictionary definitions typically explicate the notion of explanation in terms of understanding: an explanation is something that gives understanding or renders something intelligible. Perhaps this is the unifying notion: the different types of explanation are all types of explanation in virtue of their power to give understanding. While certainly an explanation must be capable of giving an appropriately tutored person a psychological sense of understanding, this is not likely to be a fruitful way forward, for there is virtually no limit to what has been taken to give understanding. Once upon a time, many thought that the facts that there were seven virtues and seven orifices of the human head gave them an understanding of why there were (allegedly) only seven planets. We need to distinguish between real and spurious understanding, and for that we need a philosophical theory of explanation that will give us the hallmark of a good explanation.
In recent years, there has been a growing awareness of the pragmatic aspect of explanation. What counts as a satisfactory explanation depends on features of the context in which the explanation is sought. Willie Sutton, the notorious bank robber, is alleged to have answered a priest’s question, ‘Why do you rob banks?’, by saying ‘That is where the money is’. We need to look at the context to be clear about what exactly an explanation is being sought for. Typically, we are seeking to explain why something is the case rather than something else. The question which Willie’s priest probably had in mind was ‘Why do you rob banks rather than have a socially worthwhile job?’, and not the question ‘Why do you rob banks rather than churches?’. We also need to attend to the background information possessed by the questioner. If we are asked why a certain bird has a long beak, it is no use answering (as the D-N approach might seem to license) that the bird is an Aleutian tern and all Aleutian terns have long beaks, if the questioner already knows that it is an Aleutian tern. A satisfactory answer typically provides new information; in this case, the questioner may be looking for some evolutionary account of why that species has evolved long beaks. Similarly, we need to attend to the level of sophistication appropriate to the answer: we do not provide the same explanation of some chemical phenomenon to a schoolchild as to a student of quantum chemistry.
Van Fraassen, whose work has been crucially important in drawing attention to the pragmatic aspects of explanation, has gone further in advocating a purely pragmatic theory of explanation. A crucial feature of his approach is a notion of relevance: explanatory answers to ‘why’ questions must be relevant, but relevance itself is, for van Fraassen, a function of the context. For that reason he has denied that it even makes sense to talk of the explanatory power of a theory. However, his critics (Kitcher and Salmon) point out that his notion of relevance is unconstrained, with the consequence that anything can explain anything. This reductio can be avoided only by developing constraints on the relation of relevance, constraints that will not be a function of the context and that hence take us away from a purely pragmatic approach to explanation.
The result is increased explanatory power for Newton’s theory, because of the increased scope and robustness of its laws: the data pool which now supports them is the largest and most widely accessible, and it brings its support to bear on a single force law with only two adjustable, multiply determined parameters (the masses). Call this kind of unification (simpler than full constructive unification) ‘coherent unification’. Much has been made of these ideas in recent philosophy of method, representing something of a resurgence of the Kant-Whewell tradition.
Unification of theories is achieved when several theories T1, T2, . . . Tn previously regarded as distinct are subsumed into a theory of broader scope T*. Classical examples are the unification of theories of electricity, magnetism, and light into Maxwell’s theory of electrodynamics, and the unification of evolutionary and genetic theory in the modern synthesis.
In some instances of unification, T* logically entails T1, T2, . . . Tn under particular assumptions. This is the sense in which the equation of state for ideal gases, pV = nRT, is a unification of Boyle’s law, pV = constant for constant temperature, and Charles’s law, V/T = constant for constant pressure. Frequently, however, the logical relations between theories involved in unification are less straightforward. In some cases, the claims of T* strictly contradict the claims of T1, T2, . . . Tn. For instance, Newton’s inverse-square law of gravitation is inconsistent with Kepler’s laws of planetary motion and Galileo’s law of free fall, which it is often said to have unified. Calling such an achievement ‘unification’ may be justified by saying that T* accounts on its own for the domains of phenomena that had previously been treated by T1, T2, . . . Tn. In other cases described as unification, T* uses fundamental concepts different from those of T1, T2, . . . Tn, so the logical relations among them are unclear. For instance, the wave and corpuscular theories of light are said to have been unified in quantum theory, but the concept of the quantum particle is alien to the classical theories. Some authors view such cases not as a unification of the original T1, T2, . . . Tn, but as their abandonment and replacement by a wholly new theory T* that is incommensurable with them.
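The entailment claimed for the ideal-gas case is easy to exhibit directly (a minimal numeric sketch; the helper function and sample values are invented for the example):

def ideal_gas_pressure(n, T, V, R=8.314):
    """p = nRT/V for n moles at temperature T (kelvin) and volume V (m^3)."""
    return n * R * T / V

n, T = 1.0, 300.0
# Boyle's law follows at constant temperature: pV is constant.
for V in (1.0, 2.0, 4.0):
    print(ideal_gas_pressure(n, T, V) * V)   # the same value, nRT, each time

# Charles's law follows at constant pressure: V/T is constant.
p = 101_325.0
for T in (250.0, 300.0, 350.0):
    V = n * 8.314 * T / p
    print(V / T)                             # the same value, nR/p, each time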
Standard techniques for the unification of theories involve isomorphism and reduction. The realization that particular theories attribute isomorphic structures to a number of different physical systems may point the way to a unified theory that attributes the same structure to all such systems. For example, all instances of wave propagation are described by the wave equation:
∂²y/∂x² = (∂²y/∂t²)/v²
where the displacement y is given different physical interpretations in different instances. The reduction of some theories to a lower-level theory, perhaps through uncovering the micro-structure of phenomena, may enable the former to be unified into the latter. For instance, Newtonian mechanics represents a unification of many classical physical theories, extending from statistical thermodynamics to celestial mechanics, which portray physical phenomena as systems of classical particles in motion.
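The point that one structure underlies all instances can be made concrete: any profile moving rigidly at speed v, y(x, t) = f(x - vt), satisfies the wave equation, whatever the displacement y physically represents. A finite-difference check (a sketch; the profile f and the sample values are invented for the example):

import math

def f(u):
    return math.exp(-u * u)      # an arbitrary smooth travelling profile

def y(x, t, v):
    return f(x - v * t)          # a shape moving rigidly at speed v

v, x, t, h = 3.0, 0.7, 0.2, 1e-3
# Central-difference estimates of the two second partial derivatives.
d2y_dx2 = (y(x + h, t, v) - 2 * y(x, t, v) + y(x - h, t, v)) / h ** 2
d2y_dt2 = (y(x, t + h, v) - 2 * y(x, t, v) + y(x, t - h, v)) / h ** 2
print(d2y_dx2, d2y_dt2 / v ** 2)   # the two values agree to numerical accuracy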
Alternative forms of theory unification may be achieved on alternative principles. A good example is provided by the Newtonian and Leibnizian programs for theory unification. The Newtonian program involves analysing all physical phenomena as the effects of forces between particles. Each force is described by a causal law, modelled on the law of gravitation. The repeated application of these laws is expected to solve all physical problems, unifying celestial mechanics with terrestrial dynamics and the sciences of solids and of fluids. By contrast, the Leibnizian program proposes to unify physical science on the basis of abstract and fundamental principles governing all phenomena, such as principles of continuity, conservation, and relativity. In the Newtonian program, unification derives from the fact that causal laws of the same form apply to every event in the universe; in the Leibnizian program, it derives from the fact that a few universal principles apply to the universe as a whole. The Newtonian approach was dominant in the eighteenth and nineteenth centuries, but more recent strategies to unify the physical sciences have hinged on formulating universal conservation and symmetry principles reminiscent of the Leibnizian program.
There are several accounts of why theory unification is a desirable aim. Many hinge on simplicity considerations: A theory of greater generality is more informative than a set of restricted theories, since we need to gather less information about a state of affairs in order to apply the theory to it. Theories of broader scope are preferable to theories of narrower scope in virtue of being more vulnerable to refutation. Bayesian principles suggest that simpler theories yielding the same predictions as more complex ones derive stronger support from common favourable evidence: On this view, a single general theory may be better confirmed than several theories of narrower scope that are equally consistent with the available data.
Theory unification has provided the basis for influential accounts of explanation. According to many authors, explanation is largely a matter of unifying seemingly independent instances under a generalization. As the explanation of individual physical occurrences is achieved by bringing them within the scope of a scientific theory, so the explanation of individual theories is achieved by deriving them from a theory of a wider domain. On this view, T1, T2, . . . Tn are explained by being unified into T*.
The question of what theory unification reveals about the world arises in the debate between scientific realism and instrumentalism. According to scientific realists, the unification of theories reveals common causes or mechanisms underlying apparently unconnected phenomena. The comparative ease with which scientists achieve unification would otherwise be inexplicable, realists maintain, but can be explained if there exists a substrate underlying all phenomena composed of real observable and unobservable entities. Instrumentalists provide a methodological account of theory unification which rejects these ontological claims of realism.
Arguments, in like manner, are sets of statements some of which purportedly provide support for another. The statements which purportedly provide the support are the premises, while the statement purportedly supported is the conclusion. Arguments are typically divided into two categories depending on the degree of support they purportedly provide: deductive arguments purportedly provide conclusive support, while inductive arguments purportedly provide only probable support. Some, but not all, arguments succeed in providing support for their conclusions. Successful deductive arguments are valid, while successful inductive arguments are strong. An argument is valid just in case, if all its premises are true, then its conclusion must be true. An argument is strong just in case, if all its premises are true, its conclusion is probable. Deductive logic provides methods for ascertaining whether or not an argument is valid, whereas inductive logic provides methods for ascertaining the degree of support the premises of an argument confer on its conclusion.
The argument from analogy is intended to establish our right to believe in the existence and nature of ‘other minds’. It admits that it is possible that the objects we call persons are, other than ourselves, mindless automata, but claims that we nonetheless have sufficient reason for supposing this not to be the case: there is more evidence that they are not mindless automata than that they are.
The classic statement of the argument comes from J.S. Mill. He wrote:
I am conscious in myself of a series of facts connected by an uniform sequence, of which the beginning is modifications of my body, the middle is feelings, the end is outward demeanour. In the case of other human beings I have the evidence of my senses for the first and last links of the series, but not for the intermediate link. I find, however, that the sequence between the first and last is as regular and constant in those other cases as it is in mine. In my own case I know that the first link produces the last through the intermediate link, and could not produce it without. Experience, therefore, obliges me to conclude that there must be an intermediate link, which must either be the same in others as in myself, or a different one . . . by supposing the link to be of the same nature . . . I conform to the legitimate rules of experimental enquiry.
As an inductive argument this is very weak, because it is condemned to arguing from a single case. But to this we might reply that, nonetheless, we have more evidence that there are other minds than that there are not.
The real criticism of the argument is due to the Austrian philosopher Ludwig Wittgenstein (1889-1951). It is that the argument assumes that we at least understand the claim that there are subjects of experience other than ourselves, who enjoy experiences which are like ours but not ours; it only asks what reason we have to suppose that claim true. But if the argument does indeed express the ground of our right to believe in the existence of others, it is impossible to explain how we are able to achieve that understanding. So if there is a place for the argument from analogy, the problem of other minds - the real, hard problem, which is how we acquire a conception of another mind - is insoluble. The argument is either redundant or worse.
Even so, the expression ‘the private language argument’ is sometimes used broadly to refer to a battery of arguments in Wittgenstein’s Philosophical Investigations which are concerned with the concepts of, and relations between, the mental and its behavioural manifestations (the inner and the outer), self-knowledge and knowledge of others’ mental states, and avowals of experience and descriptions of experience. It is sometimes used narrowly to refer to a single chain of argument in which Wittgenstein demonstrates the incoherence of the idea that sensation names and names of experiences are given meaning by association with a mental ‘object’, e.g., the word ‘pain’ by association with the sensation of pain, or by mental (private) ‘ostensive definition’, in which a mental ‘entity’ supposedly functioning as a sample, e.g., a mental image stored in memory, is conceived as providing a paradigm for the application of the name.
A ‘private language’ is not a private code, which could be cracked by another person, nor a language spoken by only one person, which could be taught to others, but a putative language the individual words of which refer to what can (apparently) be known only by the speaker, i.e., to his immediate private sensations or, to use empiricist jargon, to the ‘ideas’ in his mind. It has been a presupposition of the mainstream of modern philosophy - empiricist, rationalist and Kantian alike - and of representationalism, that the languages we speak are such private languages, that the foundations of language no less than the foundations of knowledge lie in private experience. To undermine this picture, with all its complex ramifications, is the purpose of Wittgenstein’s private language arguments.
There are various ways of distinguishing types of foundationalist epistemology, whereby Plantinga (1983) has put forward an influential conception of ‘classical foundationalism’, specified in terms of limitations on the foundations. He construes this as a disjunction of ‘ancient and medieval foundationalism’, which takes foundations to comprise what is self-evident and ‘evident to the senses’, and ‘modern foundationalism’, which replaces ‘evident to the senses’ with ‘incorrigible’, which in practice was taken to apply to beliefs about one’s present states of consciousness. Plantinga himself developed this notion in the context of arguing that items outside this territory, in particular certain beliefs about God, could also be immediately justified. A popular recent distinction is between what is variously called ‘strong’ or ‘extreme’ foundationalism and ‘moderate’ or ‘minimal’ foundationalism, with the distinction depending on whether various epistemic immunities are required of foundations. Finally, ‘simple’ and ‘iterative’ foundationalism differ on whether it is required of a foundation only that it be immediately justified, or whether it is also required that the higher-level belief that the former belief is immediately justified be itself immediately justified.
However, the classic opposition is between foundationalism and coherentism. Coherentism denies any immediate justification. It deals with the regress argument by rejecting ‘linear’ chains of justification and, in effect, taking the total system of belief to be epistemically primary. A particular belief is justified to the extent that it is integrated into a coherent system of belief. More recently, ‘pragmatists’ like the American educator, social reformer and philosopher of pragmatism John Dewey (1859-1952) have developed a position known as contextualism, which avoids ascribing any overall structure to knowledge. Questions concerning justification can only arise in particular contexts, defined in terms of assumptions that are simply taken for granted, though they can be questioned in other contexts, where other assumptions will be privileged.
Meanwhile, it is, nonetheless, the idea that the language each of us speaks is essentially private, that learning a language is a matter of associating words with, or ostensively defining words by reference to, subjective experience (the ‘given’), and that communication is a matter of stimulating in the mind of the hearer a pattern of associations qualitatively identical with that in the mind of the speaker, that is linked with multiple mutually supporting misconceptions about language, experience and its identity, the mental and its relation to behaviour, self-knowledge, and knowledge of the states of mind of others.
1. The idea that there can be such a thing as a private language is one manifestation of a tacit commitment to what Wittgenstein called ‘Augustine’s picture of language’ - a pre-theoretical picture according to which the essential function of words is to name items in reality, the link between word and world is effected by ‘ostensive definition’, and the essential function of sentences is to describe states of affairs. Applied to the mental, this implies that one knows what a psychological predicate such as ‘pain’ means if one knows, is acquainted with, what it stands for - a sensation one has. The word ‘pain’ is linked to the sensation it names by way of private ostensive definition, which is effected by concentrating (the subjective analogue of pointing) on the sensation and undertaking to use the word for that sensation. First-person present-tense psychological utterances, such as ‘I have a pain’, are conceived to be descriptions which the speaker, as it were, reads off the facts which are privately accessible to him.
2. Experiences are conceived to be privately owned and inalienable - no one else can have my pain, only pains qualitatively, but not numerically, identical with mine. They are also thought to be epistemically private - only I really know that what I have is a pain; others can at best only believe or surmise that I am in pain.
3. Avowals of experience are expressions of self-knowledge. When I have an experience, e.g., a pain, I am conscious or aware that I have it by introspection (conceived as a faculty of inner sense). Consequently, I have direct or immediate knowledge of my subjective experience. Since no one else can have what I have, or peer into my mind, my access is privileged. I know, and am certain, that I have a certain experience whenever I have it, for I cannot doubt that this, which I now have, is a pain.
4. One cannot gain introspective access to the experiences of others, so one can obtain only indirect knowledge or belief about them. They are hidden behind observable behaviour, inaccessible to direct observation, and can only be inferred, e.g., analogically. The argument from analogy is intended to establish our right to believe in the existence and nature of other minds. It admits that it is possible that the objects we call persons, other than ourselves, are mindless automata, but claims that we nonetheless have sufficient reason for supposing this not to be the case: there is more evidence that they are not mindless automata than that they are.
The real criticism of the argument is due to Wittgenstein (1953). It is that the argument assumes that we at least understand the claim that there are subjects of experience other than ourselves, who enjoy experiences which are like ours but not ours; it merely asks what reason we have to suppose that claim true. But if the argument does indeed express the ground of our right to believe in the existence of others, it is impossible to explain how we are able to achieve that understanding. So if there is a place for the argument from analogy, the problem of other minds - the real, hard problem, which is how we acquire a conception of another mind - is insoluble. The argument is either redundant or worse.
Even so, inference to the best explanation is claimed by many to be a legitimate form of non-deductive reasoning, one which provides an important alternative to both deduction and enumerative induction. Indeed, some would claim that it is only through reasoning to the best explanation that one can justify beliefs about the external world, the past, theoretical entities in science, and even the future. Consider beliefs about the external world, and assume that what we know of the external world we know through our knowledge of subjective and fleeting sensations. It seems obvious that we cannot deduce any truths about the existence of physical objects from truths describing the character of our sensations. But neither can we observe a correlation between sensations and something other than sensations, since by hypothesis all we ever have to rely on ultimately is knowledge of our sensations. Nevertheless, we may be able to posit physical objects as the best explanation for the character and order of our sensations. In the same way, various hypotheses about the past might best explain present memory; theoretical postulates in physics might best explain phenomena in the macro-world; and it is even possible that our expectations about the future might be justified as positing what would best explain past observations. But what exactly is the form of an inference to the best explanation? If we are to distinguish between legitimate and illegitimate reasoning to the best explanation, it would seem that we need a more sophisticated model of the argument form. In reasoning to an explanation we need ‘criteria’ for choosing between alternative explanations, and if reasoning to the best explanation is to constitute a genuine alternative to inductive reasoning, it is important that these criteria not be implicit premises which will convert our argument into an inductive argument.
However, in evaluating the claim that inference to the best explanation constitutes a legitimate and independent argument form, one must explore the question of whether it is a contingent fact that at least most phenomena have explanations, and that explanations satisfying a given criterion - simplicity, for example - are more likely to be correct. If the universe were structured in such a way that simple, powerful, familiar explanations were usually the correct ones, it is difficult to avoid the conclusion that this would be an empirical fact about our universe, discoverable only a posteriori. If reasoning to the best explanation relies on such criteria, it seems that one cannot without circularity use reasoning to the best explanation to discover that reliance on such criteria is safe. But if one has some independent way of discovering that simple, powerful, familiar explanations are more often correct, then why should we think that reasoning to the best explanation is an independent source of knowledge about the world? Indeed, why should we not conclude that it would be more perspicuous to represent the reasoning as simply an instance of familiar inductive reasoning?
5. The observable behaviour from which we thus infer consists of bare bodily movements caused by inner mental events. The outer (behaviour) is not logically connected with the inner (the mental). Hence, the mental is essentially private, known, stricto sensu, only to its owner, and the private and subjective is better known than the public.
The resultant picture leads first to scepticism and then, ineluctably, to ‘solipsism’. Since pretence and deceit are always logically possible, one can never be sure whether another person is really having the experience he behaviourally appears to be having. But worse, if a given psychological predicate means ‘this’ (which I have), and no one else could logically have ‘this’ (since experience is inalienable), then I have no ground for ascribing experience to any other subject. Similar scepticism extends to meaning: if the defining samples of the primitive terms of a language are private, then I cannot be sure that what you mean by ‘red’ or ‘pain’ is not qualitatively identical with what I mean by ‘green’ or ‘pleasure’. And nothing can stop us from concluding that all languages are private and strictly mutually unintelligible.
Philosophers had always been aware of the problematic nature of knowledge of other minds and of the mutual intelligibility of speech on their favoured picture. It is a manifestation of Wittgenstein’s genius to have launched his attack at the point which seemed incontestable - asking not whether I can know of the experiences of others, nor whether I can understand the ‘private language’ of another in attempted communication, but whether I can understand my own allegedly private language.
The functionalist thinks of ‘mental states’ and events as causally mediating between a subject’s sensory inputs and that subject’s ensuing behaviour. Functionalism is the doctrine that what makes a mental state the type of state it is - a pain, a smell of violets, a belief that koalas are dangerous - is the functional relation it bears to the subject’s perceptual stimuli, behavioural responses and other mental states. Functionalism is one of the great ‘isms’ that have been offered as solutions to the mind/body problem. The cluster of questions that all of these ‘isms’ promise to answer can be expressed as: what is the ultimate nature of the mental? At the most general level, what makes a mental state mental? At the more specific level that has been the focus in recent years: what do thoughts have in common in virtue of which they are thoughts? That is, what makes a thought a thought? What makes a pain a pain? Cartesian dualism said the ultimate nature of the mental was to be found in a special mental substance. Behaviourism identified mental states with behavioural dispositions; physicalism, in its most influential version, identifies mental states with brain states.
Reacting against Cartesian dualism and against the ‘first-person’ perspective of introspective psychology, the behaviourists claimed that there is nothing to the mind but the subject’s behaviour and dispositions to behave. For example, for Rudolf to be in pain is for Rudolf to be either behaving in a wincing-groaning-and-favouring way or disposed to do so (in that he would so behave were something not keeping him from doing so): it is nothing about Rudolf’s putative inner life or any episode taking place within him.
Though behaviourism avoided a number of nasty objections to dualism (notably Descartes’ admitted problem of mind-body interaction), some theorists were uneasy; they felt that in its total repudiation of the inner, behaviourism was leaving out something real and important. U.T. Place spoke of an ‘intractable residue’ of conscious mental items that bear no clear relations to behaviour of any particular sort. And it seems perfectly possible for two people to differ psychologically despite total similarity of their actual and counterfactual behaviour, as in a Lockean case of ‘inverted spectrum’; for that matter, a creature might exhibit all the appropriate stimulus-response relations and lack mentation entirely.
For such reasons, Place and the Cambridge-born Australian philosopher J.J.C. Smart proposed a middle way, the ‘identity theory’, which allowed that at least some mental states and events are genuinely inner and genuinely episodic after all: they are not to be identified with outward behaviour or even with hypothetical dispositions to behave. But, contrary to dualism, the episodic mental items are not ghostly or non-physical either; rather, they are neurophysiological. Pain is a standard example of an experience that seems to resist ‘reduction’ in terms of behaviour. Although ‘pain’ obviously has behavioural consequences, being unpleasant, disruptive and sometimes overwhelming, there is also something more than behaviour, something ‘that it is like’ to be in pain, and there is all the difference in the world between pain behaviour accompanied by pain and the same behaviour without pain. Theories identifying pain with the neural events subserving it have been attacked, e.g., by Kripke, on the grounds that while a genuine metaphysical identity should be necessarily true, the association between pain and any such events would be contingent.
Nonetheless, the American philosophers Hilary Putnam (1926-) and Jerry Alan Fodor (1935-) pointed out a presumptuous implication of the identity theory understood as a theory of types or kinds of mental items: that a mental type such as pain has always and everywhere the neurophysiological characterization initially assigned to it. For example, if the identity theorist identified pain itself with the firing of c-fibres, it followed that a creature of any species (earthly or science-fiction) could be in pain only if that creature had c-fibres and they were firing. However, such a constraint on the biology of any being capable of feeling pain is both gratuitous and indefensible: why should we suppose that any organism must be made of the same chemical materials as us in order to have what can accurately be recognized as pain? The identity theorists had overreacted to the behaviourists’ difficulties and focussed too narrowly on the specifics of biological humans’ actual inner states, and in doing so they had fallen into species chauvinism.
Fodor and Putnam advocated the obvious correction: what was important was not the firing of c-fibres per se, but what the c-fibres were doing - what their firing contributed to the operation of the organism as a whole. The role of the c-fibres could have been performed by any mechanically suitable component; so long as that role was performed, the psychological constitution of the organism would have been unaffected. Thus, to be in pain is not, per se, to have c-fibres that are firing, but merely to be in some state or other, of whatever biochemical description, that plays the same functional role as the firing of c-fibres plays in human beings. We may continue to maintain that pain ‘tokens’ - individual instances of pain occurring in particular subjects at particular times - are identical with particular neurophysiological states of those subjects at those times, namely whichever states happen to be playing the appropriate roles: this is the thesis of ‘token identity’ or ‘token physicalism’. But pain itself (the kind, universal or type) can be identified only with something more abstract: the causal or functional role that c-fibre firings share with their potential replacements or surrogates. Mental state-types are identified not with neurophysiological types but with more abstract functional roles, as specified by state-tokens’ relations to the organism’s inputs, outputs and other psychological states.
Functionalism has distinct sources. Putnam and Fodor saw mental states in terms of an empirical computational theory of the mind; Smart’s ‘topic-neutral’ analyses led Armstrong and Lewis to a functional analysis of mental concepts; and Wittgenstein’s idea of meaning as use led to a version of functionalism as a theory of meaning, further developed by Wilfrid Sellars (1912-89) and later by Harman.
One motivation behind functionalism can be appreciated by attention to artefact concepts like ‘carburettor’ and biological concepts like ‘kidney’. What it is for something to be a carburettor is for it to mix fuel and air in an internal combustion engine - carburettor is a functional concept. In the case of ‘kidney’, the scientific concept is functional - defined in terms of a role in filtering the blood and maintaining certain chemical balances.
The kind of function relevant to the mind can be introduced via the parity-detecting automaton. According to functionalism, all there is to being in pain is being in a state apt to make one say ‘ouch’, wonder whether one is ill, and so forth; and the method for defining automaton states is supposed to work for mental states as well. Mental states can be totally characterized in terms that involve only logico-mathematical language and terms for input signals and behavioural outputs. Thus functionalism satisfies one of the desiderata of behaviourism: it characterizes the mental in entirely non-mental language.
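To make the machine-table idea concrete, here is a minimal sketch of a parity-detecting automaton in Python (the code and its state labels are illustrative assumptions, not drawn from any source text): each state is individuated purely by its relations to inputs, outputs and the other state, which is just how functionalism proposes to individuate mental states.

    # Parity-detecting finite automaton: the labels 'S1' and 'S2' are
    # arbitrary; each state is exhausted by its transition relations.
    TRANSITIONS = {
        ('S1', 1): 'S2',  # reading a 1 flips the state
        ('S2', 1): 'S1',
        ('S1', 0): 'S1',  # reading a 0 leaves the state unchanged
        ('S2', 0): 'S2',
    }

    def run(bits):
        """Start in S1; report whether an even or odd number of 1s was read."""
        state = 'S1'
        for bit in bits:
            state = TRANSITIONS[(state, bit)]
        return 'even' if state == 'S1' else 'odd'

    assert run([1, 0, 1, 1]) == 'odd'  # three 1s were read

Nothing in the specification says what S1 and S2 are made of; any mechanically suitable realization will do - which is the moral the functionalist draws for pain.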
Suppose we have a theory of mental states that specifies all the causal relations among the states, sensory inputs and behavioural outputs. Focussing on pain as a sample mental state, it might say, among other things, that sitting on a tack causes pain, and that pain causes anxiety and saying ‘ouch’. Agreeing, for the sake of the example, to go along with this moronic theory, functionalism would then say that we could define ‘pain’ as follows: being in pain = being in the first of two states, the first of which is caused by sitting on tacks, and which in turn causes the other state and the emitting of ‘ouch’. More symbolically:
Being in pain = Being an x such that ∃P ∃Q[sitting on a tack causes P, and P causes both Q and emitting ‘ouch’, and x is in P]
More generally, if T is a psychological theory with ‘n’ mental terms, of which the seventeenth is ‘pain’, we can define ‘pain’ relative to T as follows (the ‘F1’ . . . ‘Fn’ are variables that replace the ‘n’ mental terms):
Being in pain = Being an x such that ∃F1 . . . ∃Fn[T(F1 . . . Fn) & x is in F17]
The existentially quantified part of the right-hand side before the ‘&’ is the Ramsey sentence of the theory ‘T’. In this way, functionalism characterizes the mental in non-mental terms, in terms that involve quantification over realizations of mental states but no explicit mention of them: thus, functionalism characterizes the mental in terms of structures that are tacked down to reality only at the inputs and outputs.
The psychological theory ‘T’ just mentioned can be either an empirical psychological theory or else a common-sense ‘folk’ theory, and the resulting functionalisms are very different. In the former case, which is named ‘psychofunctionalism’, the functional definitions are supposed to fix the extensions of mental terms. In the latter case, conceptual functionalism, the functional definitions are aimed at capturing our ordinary mental concepts. (This distinction shows an ambiguity in the original question of what the ultimate nature of the mental is.) The idea of psychofunctionalism is that the scientific nature of the mental consists not in anything biological, but in something ‘organizational’, analogous to computational structure. Conceptual functionalism, by contrast, can be thought of as a development of logical behaviourism. Logical behaviourists thought that pain was a disposition to pain behaviour. But as the polemical British Catholic logician and moral philosopher Peter Thomas Geach (1916-) and the influential American philosopher and teacher Roderick Milton Chisholm (1916-99) pointed out, what counts as pain behaviour depends on the agent’s beliefs and desires. Conceptual functionalism avoids this problem by defining each mental state in terms of its contribution to dispositions to behave - and to have other mental states.
The functional characterizations given above assumed a psychological theory with a finite number of mental state terms. In the case of monadic states like pain, the sensation of red, and so forth, it does seem a theoretical option simply to list the states and their relations to other states, inputs and outputs. But for a number of reasons this is not a sensible theoretical option for belief-states, desire-states, and other propositional-attitude states. For one thing, the list would be too long to be represented without combinatorial methods; indeed, there is arguably no upper bound on the number of propositions any one of which could in principle be an object of thought. For another thing, there are systematic relations among beliefs: for example, the belief that John loves Mary and the belief that Mary loves John represent the same objects as related to each other in converse ways. A theory of the nature of beliefs can hardly just leave out such an important feature of them; we cannot treat ‘believes-that-grass-is-green’ and the like as unrelated primitive predicates. So we will need a more sophisticated theory, one that involves some sort of combinatorial apparatus. The most promising candidates are those that treat belief as a relation. But a relation to what? There are two distinct issues here. One issue is how to formulate the functional theory so as to capture these relations; one suggestion is in terms of a correspondence between the logical relations among sentences and the inferential relations among mental states. (Compare: knowledge of what an experience is like can appear in embedded contexts - if this is what it is like to see red, then what it is like to see orange is similar - and analyses of such knowledge as mere abilities to imagine and recognize face the same problem in explaining this logical behaviour that non-cognitive analyses of ethical language face in explaining the logical behaviour of ethical predicates.) A second issue is what types of states could possibly realize the relational propositional-attitude states. Fodor (1987) has stressed the systematicity of propositional attitudes, pointing out that beliefs whose contents are systematically related exhibit the following sort of empirical relation: if one is capable of believing that Mary loves John, one is also capable of believing that John loves Mary. Fodor argues that only a language of thought in the brain could explain this fact.
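A toy sketch (hypothetical names throughout, offered only to illustrate the combinatorial point) shows how treating attitudes as relations to structured objects delivers systematicity: whoever can build the one content can build the other from the same constituents.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Atom:
        name: str

    @dataclass(frozen=True)
    class Loves:
        lover: Atom
        beloved: Atom

    john, mary = Atom('John'), Atom('Mary')
    p1 = Loves(john, mary)  # that John loves Mary
    p2 = Loves(mary, john)  # that Mary loves John

    # Systematically related contents share constituents but differ in
    # their arrangement; they are not unrelated primitives.
    assert p1 != p2
    assert {p1.lover, p1.beloved} == {p2.lover, p2.beloved}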
Jerry Alan Fodor (1935-) is an American philosopher of mind well known for a resolute realism about the nature of mental functioning. Taking the analogy between thought and computation seriously, Fodor believes that mental representations should be conceived as individual states with their own identities and structures, like formulae transformed by processes of computation - in contrast to ‘holists’ such as Donald Herbert Davidson (1917-2003) and ‘instrumentalists’ about mental ascription such as Daniel Clement Dennett (1942-). In recent years he has become a vocal critic of some of the aspirations of cognitive science. His books include ‘The Language of Thought’ (1975), ‘The Modularity of Mind’ (1983), ‘Psychosemantics’ (1987), ‘The Elm and the Expert’ (1994), ‘Concepts: Where Cognitive Science Went Wrong’ (1998), and ‘Hume Variations’ (2003).
‘Folk psychology’ is primarily ‘intentional explanation’: the idea that people’s behaviour can be explained by reference to the contents of their beliefs and desires. Correspondingly, the methodological issue is whether science can be made out of intentional explanation. Similar questions might be asked about the scientific potential of other folk-psychological concepts (consciousness, for example), but what makes intentional explanation problematic is that it presupposes that there are intentional states. What makes intentional states problematic is that they exhibit a pair of properties assembled in the concept of ‘intentionality’: in its current use, the expression ‘intentionality’ refers to that property of the mind by which it is directed at, about, or of objects and states of affairs in the world. Intentionality, so defined, includes such mental phenomena as belief, desire, intention, hope, fear, memory, hate, lust and disgust, as well as perception and intentional action. Two properties in particular remain:
(1) Intentional states have causal powers. Thoughts (more precisely, havings of thoughts) make things happen: typically, thoughts make behaviour happen. Self-pity can make one weep, as can onions.
(2) Intentional states are semantically evaluable. Beliefs, for example, are about how things are, and are therefore true or false depending on whether things are the way they are believed to be. Consider, by contrast, tables, chairs, onions, and the cat’s being on the mat: though they all have causal powers, they are not about anything and are therefore not evaluable as true or false.
If there is to be an intentional science, there must be semantically evaluable things that have causal powers. Moreover, there must be laws about such things, including, in particular, laws that relate beliefs and desires to one another and to actions. If there are no intentional laws, then there is no intentional science. Perhaps scientific explanation is not always explanation by law subsumption, but surely it often is, and there is no obvious reason why an intentional science should be exceptional in this respect. Moreover, one of the best reasons for supposing that common sense is right about there being intentional states is precisely that there seem to be many reliable intentional generalizations for such states to fall under. It is reasonable to assume that many of the truisms of folk psychology either articulate intentional laws or come pretty close to doing so.
So, for example, it is a truism of folk psychology that rote repetition facilitates recall. (More generally, repetition improves performance: ‘How do you get to Carnegie Hall?’) This generalization relates the content of what you learn to the content of what you say to yourself while you are learning it: what it expresses is, prima facie, a lawful causal relation between types of intentional states. Real psychology has lots more to say on this topic, but it is, nonetheless, much more of the same. To a first approximation, repetition does causally facilitate recall, and that it does so is lawful.
There are, to put it mildly, many other cases of such reliable intentional causal generalizations. There are also many, many kinds of folk-psychological generalizations about ‘correlations’ among intentional states, and these too are plausible candidates for fleshing out as intentional laws: for example, that anyone who knows what 7 + 5 is also knows what 7 + 6 is; that anyone who knows what ‘John loves Mary’ means also knows what ‘Mary loves John’ means, and so forth.
Philosophical opinion about folk-psychological intentional generalizations runs the gamut from ‘there are not any that are really reliable’ to ‘they are all platitudinously true, hence not empirical at all’. Suffice it to say that the necessity of ‘if 7 + 5 = 12 then 7 + 6 = 13’ is quite compatible with the ‘contingency’ of ‘if someone knows that 7 + 5 = 12, then he knows that 7 + 6 = 13’. And then part of the question ‘how can there be an intentional science?’ is ‘how can there be intentional laws?’
Let us assume, most generally, that laws support counterfactuals and are confirmed by their instances. Assume, further, that every law is either basic or not. Basic laws are either exceptionless or intractably statistical, and the only basic laws are the laws of basic physics.
All non-basic laws, including the laws of all the non-basic sciences and, in particular, the intentional laws of psychology, are ‘c[eteris] p[aribus]’ laws: they hold only ‘all else being equal’. There is - anyhow there ought to be - a whole department of the philosophy of science devoted to the construal of cp laws: to making clear, for instance, how they can be explanatory, how they can support counterfactuals, how they can subsume the singular causal truths that instance them, and so forth. These issues are omitted here because they do not belong to philosophical psychology as such: the laws of intentional psychology are cp laws because psychology is a special, i.e., non-basic, science, not because it is an intentional science.
There is a further quite general property that distinguishes cp laws from basic ones: non-basic laws require mechanisms for their implementation. Suppose, for a working example, that some special science states that being ‘F’ causes xs to be ‘G’ (being irradiated by sunlight causes plants to photosynthesize; being freely suspended near the earth’s surface causes bodies to fall with uniform acceleration; and so on). Then it is a constraint on this generalization’s being lawful that the question ‘How does being F cause xs to be G?’ must have an answer. This is one of the ways special-science laws differ from basic laws. A basic law says that F’s cause (or are) G’s, full stop; if there were something further to say about how, or why, or by what means F’s cause G’s, the law would not have been basic but derived.
Typically - though not invariably - the mechanism that implements a special-science law is defined over the micro-structure of the things that satisfy the law. The answer to ‘how does sunlight make plants photosynthesize?’ implicates the chemical structure of plants; the answer to ‘how does freezing make water solid?’ implicates the molecular structure of water; and so forth. In consequence, theories about how a law is implemented usually draw upon the vocabularies of two or more levels of explanation.
If you are specially interested in the peculiarities of aggregates of matter at the Lth level (in plants, or minds, or mountains, as it might be), then you are likely to be specially interested in implementing mechanisms at the L-1th level (the ‘immediate’ mechanisms): this is because the characteristics of L-level laws can often be explained by the characteristics of their L-1th-level implementations. You can learn a lot about plants qua plants by studying their chemical composition. You learn correspondingly less by studying their subatomic constituents, though no doubt laws about plants are implemented, eventually, sub-atomically. The question thus arises of what mechanisms might immediately implement the intentional laws of psychology, and so account for their characteristic features.
Intentional laws subsume causal interactions among mental processes; that much is truistic. But in this context something substantive is intended, something that a theory of the implementation of intentional laws will have to account for: the causal processes that intentional states enter into have a tendency to preserve their semantic properties. For example, thinking true thoughts tends to cause one to think more thoughts that are also true. This is no small matter: the very rationality of thought depends on such facts, as when the true thought that ((P ➞ Q) and (P)) tends to cause the true thought that ‘Q’.
A good deal of what has happened in psychology - notably since the Viennese founder of psychoanalysis, Sigmund Freud (1856-1939) - has consisted of finding new and surprising cases where mental processes are semantically coherent under intentional characterization. Freud made his reputation by showing that this was true even of much of the detritus of behaviour: dreams, verbal slips and the like, even free word association and the identification of ink-blot cards (the Rorschach test). Even so, it turns out that the psychology of normal mental processes is largely grist for the same mill. For example, it has proved theoretically revealing to construe perceptual processes as inferences that take specifications of proximal stimulations as premises, yield specifications of distal objects as conclusions, and are reliably truth-preserving in ecologically normal circumstances. The psychology of learning cries out for analogous treatment, e.g., for treatment as a process of hypothesis formation and confirmation.
Intentional states, as common sense understands them, have both causal and semantic properties, and the combination appears to be unprecedented: propositions are semantically evaluable, but they are abstract objects and have no causal powers; onions are concrete particulars and have causal powers, but they are not semantically evaluable. Intentional states seem to be unique in combining the two, and that is what so many philosophers have against them.
Suppose, once again, that someone writes ‘the cat is on the mat’. On the one hand, the inscription is a concrete particular in good standing, and it has, qua material object, an open-ended galaxy of causal powers (it reflects light in ways that are essential to its legibility; it exerts a small but in principle detectable gravitational effect upon the moon; and so on). On the other hand, the inscription is about something and is therefore semantically evaluable: it is true if and only if there is a cat where it says that there is. So the inscription ‘the cat is on the mat’ has both content and causal powers, and so does my thought that the cat is on the mat.
At this point we may ask how many words there are in the sentence ‘The cat is on the mat’. There are at least two answers to this question, because one can count either word types, of which there are five, or individual occurrences - known as tokens - of which there are six. Moreover, depending on how one chooses to think of word types, another answer is possible: since the sentence contains a definite article, nouns, a preposition and a verb, there are four grammatically different types of word in the sentence.
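The two ways of counting can be put in a few lines of Python (a sketch; the whitespace tokenization and lower-casing are simplifying assumptions):

    sentence = 'The cat is on the mat'
    tokens = sentence.lower().split()  # individual occurrences

    print(len(tokens))       # 6 word tokens
    print(len(set(tokens)))  # 5 word types ('the' occurs twice)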
The type/token distinction, understood as a distinction between sorts of thing and their instances, is commonly applied to mental phenomena. For example, one can think of pain in the type way, as when we say that we have experienced burning pain many times, or in the token way, as when we speak of the burning pain currently being suffered. The type/token distinction for mental states and events becomes important in the context of attempts to describe the relationship between mental and physical phenomena. In particular, the identity theory asserts that mental states are physical states, and this raises the question whether the identity in question is of types or of tokens.
If mental states are identical with physical states, presumably the relevant physical states are various sorts of neural state. Our concepts of mental states such as thinking, sensing and feeling are, of course, different from our concepts of neural states, of whatever sort. Still, that is no problem for the identity theory. As J.J.C. Smart (1962), who first argued for the identity theory, emphasizes, the requisite identity does not depend on our concepts of mental states or the meanings of mental terminology. For ‘a’ to be identical with ‘b’, both ‘a’ and ‘b’ must have exactly the same properties, but the terms ‘a’ and ‘b’ need not mean the same. The principle of the indiscernibility of identicals states that if ‘a’ is identical with ‘b’, then every property that ‘a’ has ‘b’ has, and vice versa. This is sometimes known as Leibniz’s law.
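Schematically, with ‘F’ ranging over properties, Leibniz’s law and its contrapositive (the form in which it is usually wielded against identity theories) can be displayed as:

a = b ➞ ∀F(Fa ↔ Fb)
∃F(Fa & ¬Fb) ➞ a ≠ b

The worry canvassed next is an application of the second form: if a pain had some property that no neural firing has, the pain and the firing could not be identical.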
However, a problem does seem to arise about the properties of mental states. Suppose pain is identical with a certain firing of c-fibres. Although a particular pain is the very same state as a neural firing, we identify that state in two different ways: as a pain and as a neural firing. The state will therefore have certain properties in virtue of which we identify it as a pain, and others in virtue of which we identify it as a neural firing. The properties in virtue of which we identify it as a pain will be mental properties, whereas those in virtue of which we identify it as a neural firing will be physical properties. This has seemed to many to lead to a kind of dualism at the level of the properties of mental states: even if we reject a dualism of substances and take people simply to be physical organisms, those organisms still have both mental and physical properties.
The problem just sketched about mental properties is widely thought to be most pressing for sensations, since the painful quality of pains and the red quality of visual sensations seem to be irretrievably non-physical. So even if mental states are all identical with physical states, those states appear to have properties that are not physical. And if mental states do actually have non-physical properties, the identity of mental with physical states would not sustain a thoroughgoing mind-body materialism.
A more sophisticated reply to the difficulty about mental properties is due independently to the forthright Australian materialist D.M. Armstrong (1926-), with J.J.C. Smart one of the leading Australian philosophers of the second half of the twentieth century, and the American philosopher David Lewis (1941-2001), who argue that for a state to be a particular sort of intentional state or sensation is for that state to bear characteristic causal relations to other particular occurrences. The properties in virtue of which we identify states as thoughts or sensations will still be neutral as between being mental or physical, since anything can bear a causal relation to anything else. But causal connections have a better chance than similarity in some unspecified respect of capturing the distinguishing properties of sensations and thoughts.
Early identity theorists insisted that the identity between mental and bodily events was contingent, meaning simply that the relevant identity statements were not conceptual truths. That leaves open the question of whether such identities would be necessarily true on other construals of necessity.
The American logician and philosopher Saul Aaron Kripke (1940-) made his early reputation as a logical prodigy, especially through work on the completeness of systems of modal logic. The three classic papers are ‘A Completeness Theorem in Modal Logic’ (1959, Journal of Symbolic Logic), ‘Semantical Analysis of Modal Logic’ (1963, Zeitschrift für mathematische Logik und Grundlagen der Mathematik) and ‘Semantical Considerations on Modal Logic’ (1963, Acta Philosophica Fennica). In ‘Naming and Necessity’ (1980), Kripke gave the classic modern treatment of the topic of reference, both clarifying the distinction between names and ‘definite descriptions’ and opening the door to many subsequent attempts to understand the notion of reference in terms of a causal link between the use of a term and an original episode of attaching a name to its subject. His ‘Wittgenstein on Rules and Private Language’ (1982) also proved seminal, putting the rule-following considerations at the centre of Wittgenstein studies and arguing that the private language argument is an application of them. Kripke has also written influential work on the theory of truth and the solution of the semantic paradoxes.
Nonetheless, Kripke (1980) has argued that such identities would have to be necessarily true if they were true at all. Some terms refer to things contingently, in that those terms would have referred to different things had circumstances been relevantly different. Kripke’s example is ‘the first Postmaster General of the United States’, which, in a different situation, would have referred to somebody other than Benjamin Franklin. Kripke calls these terms non-rigid designators. Other terms refer to things necessarily, since no circumstances are possible in which they would refer to anything else; these terms are rigid designators.
If the terms ‘a’ and ‘b’ refer to the same thing and both designate that thing rigidly, the identity statement ‘a = b’ is necessarily true. Kripke maintains that the term ‘pain’ and the terms for the various brain states all designate the states they refer to rigidly: no circumstances are possible in which these terms would refer to different things. So if pain were identical with some particular brain state, it would be necessarily identical with that state. Yet Kripke argues that pain cannot be necessarily identical with any brain state, since the tie between pains and brain states plainly seems contingent. He concludes that they cannot be identical at all.
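The skeleton of the modal argument, with ‘□’ read as ‘necessarily’ and ‘a’ and ‘b’ taken to be rigid designators, can be displayed as follows:

(1) a = b ➞ □(a = b)    (necessity of identity for rigid designators)
(2) ¬□(pain = the relevant brain state)    (the tie seems contingent)
(3) Therefore, pain ≠ the relevant brain state.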
Kripke notes that our intuitions about whether an identity is contingent can mislead us. Heat is necessarily identical with mean molecular kinetic energy: no circumstances are possible in which they are not identical. Still, it may at first sight appear that heat could have been identical with some other phenomenon. It appears this way, Kripke argues, only because we pick out heat by our sensation of heat, which bears only a contingent bond to mean molecular kinetic energy. It is the sensation of heat that is connected contingently with mean molecular kinetic energy, not the physical heat itself.
Kripke insists, however, that such reasoning cannot disarm our intuitive sense that pain is connected only contingently with brain states. This is because for a state to be pain it is necessary for it to be felt as pain; unlike heat, in the case of pain there is no difference between the state itself and how that state is felt, and intuitions about the one are perforce intuitions about the other.
Kripke’s assumption that the term ‘pain’ is a rigid designator is open to question. As Lewis notes, one need not hold that ‘pain’ determines the same state in all possible situations; indeed, the causal theory explicitly allows that it may not. And if it does not, it may be that pains and brain states are contingently identical. There is also a problem about a substantive assumption Kripke makes about the nature of pains, namely that pains are necessarily felt as pains. First impressions notwithstanding, there is reason to think not: there are times when we are not aware of our pains, for example, when we are suitably distracted. So the relationship between pains and our being aware of them may be contingent after all, just as the relationship between physical heat and our sensation of heat is. And that would disarm the intuition that pain is connected only contingently with brain states.
Kripke’s argument focuses on pains and other sensations, which, because they have qualitative properties, are frequently held to cause the greatest problems for the identity theory. The American philosopher Thomas Nagel (1937-) traces the general difficulty for the identity theory to the consciousness of mental states. A mental state’s being conscious, he urges, means that there is something it is like to be in that state, and to understand that, we must adopt the point of view of the kind of creature that is in the state. But an account of something is objective, he insists, only insofar as it is independent of any particular type of point of view. Since consciousness is inextricably tied to points of view, no objective account of it is possible. And that means conscious states cannot be identical with bodily states.
The viewpoint of a creature is central to what that creature’s conscious states are like, because different kinds of creatures have conscious states with different kinds of qualitative property. However, the qualitative properties of a creature’s conscious states depend, in an objective way, on that creature’s perceptual apparatus. We cannot always predict what another creature’s conscious states are like, just as we cannot always extrapolate from microscopic to macroscopic properties, at least without a suitable theory that covers those properties. But what a creature’s conscious states are like depends in an objective way on its bodily endowment, which is itself objective. So these considerations give us no reason to think that what those conscious states are like is not also an objective matter.
If a sensation is not conscious, there is nothing it is like to have it. So Nagel’s idea that what it is like to have sensations is central to their nature suggests that sensations cannot occur without being conscious. And that, in turn, seems to threaten their objectivity: if sensations must be conscious, perhaps they have no nature independently of how we are aware of them, and thus no objective nature. Nonetheless, only conscious sensations seem to cause problems for the identity theory.
The notion of subjectivity, as Nagel sees it, is the notion of a point of view - what psychologists call a ‘constructionist theory of mind’. This notion is clearly tied to the notion of essential subjectivity. This kind of subjectivity is constituted by an awareness of the world’s being experienced differently by different subjects of experience. (It is thus possible to see how the privacy of phenomenal experience might easily be confused with the kind of privacy inherent in a point of view.)
Point-of-view subjectivity seems to take time to develop. The developmental evidence suggests that even toddlers are able to understand others as being subjects of experience. For instance, at a very early age we begin ascribing mental states to other things - generally, to those same things to which we ascribe ‘eating’. And at quite an early age we can say what others would see from where they are standing; we demonstrate early on an understanding that the information available is different for different perceivers. It is in these perceptual senses that we first ascribe point-of-view subjectivity.
Nonetheless, some experiments seem to show that the point-of-view subjectivity we first ascribe to others is limited. A popular and influential series of experiments by Wimmer and Perner (1983) is usually taken to illustrate these limitations (though there are disagreements about the interpretation). Two children - Dick and Jane - watch as an experimenter puts a box of candy somewhere opaque, such as in a cookie jar. Jane leaves the room. Dick is asked where Jane will look for the candy, and he correctly answers, ‘In the cookie jar’. The experimenter, in Dick’s view, then takes the candy out of the cookie jar and puts it in another opaque place, say a drawer. When Dick is asked where he will look for the candy, he says, quite correctly, ‘In the drawer’. But when asked where Jane will look for the candy when she returns, Dick answers, ‘In the drawer’. Dick ascribes to Jane not the point-of-view subjectivity she is likely to have, but the one that fits the facts. Dick is unable to ascribe to Jane a false belief - his ascription is ‘reality-driven’ - and his inability demonstrates that Dick does not yet have a fully developed point-of-view subjectivity.
At around the age of four, children in Dick’s position do ascribe the right point-of-view subjectivity to children in Jane’s position (‘Jane will look in the cookie jar’). But even so, a fully developed notion of point-of-view subjectivity is not yet attained. Suppose that Dick and Jane are shown a dog under a tree, but only Dick is shown the dog’s arriving there by chasing a boy up the tree. If Dick is asked to describe the scene as Jane, who he knows not to have seen the dog arrive, would see it, Dick will display a more fully developed point-of-view subjectivity only if his description leaves out the preliminaries that only he witnessed. It turns out that four-year-olds are still restricted by this limitation; only children of six to seven succeed.
Yet even when children succeed in these cases, their point-of-view subjectivity is reality-driven: ascribing a point of view to others is still relative to the information available to them. Only in our teens do we seem capable of understanding that others can view the world differently from ourselves even when given access to the same information. Only then do we seem to become aware of the subjectivity of the knowing procedure itself: interpreting the ‘facts’ can be coloured by one’s knowing procedure and history, and there are no ‘merely’ objective facts.
Thus there is evidence that we come to ascribe a more and more subjective point of view to others: from a point-of-view subjectivity that is completely reality-driven, to the possibility that others have insufficient information, to their having merely different information, and finally to their understanding the same information differently. This developmental picture seems insufficiently familiar to philosophers, and yet well worth our thinking about and critically evaluating.
The following questions all need answering. Does the apparent fact that the point-of-view subjectivity we ascribe to others develops over time, becoming more and more ‘private’, shed any light on the sort of subjectivity we ascribe to ourselves? Do our self-ascriptions of subjectivity themselves become more and more ‘private’, more and more removed both from the subjectivity of others and from the objective world? If so, what is the philosophical importance of these facts? At the least, this developmental history shows that disentangling ourselves from the world we live in is a complicated matter.
The last two decades have been a period of extraordinary change, especially in psychology. Cognitive psychology, which focuses on higher mental processes like reasoning, decision making, problem solving, language processing and higher-level visual processing, has become - perhaps - the dominant paradigm among experimental psychologists, while behaviouristically oriented approaches have gradually fallen into disfavour. Largely as a result of this paradigm shift, the level of interaction between the disciplines of philosophy and psychology has increased dramatically.
Developmental psychology was for a time dominated by the ideas of the Swiss psychologist and pioneer of learning theory Jean Piaget (1896-1980), whose primary concern was a theory of cognitive development (his own term was ‘genetic epistemology’). Like modern-day cognitive psychologists, Piaget was interested in the mental representations and processes that underlie cognitive skills. However, Piaget’s genetic epistemology never co-existed happily with cognitive psychology, though his idea that reasoning is based on an internalized version of the predicate calculus has influenced research into adult thinking and reasoning. One reason for the lack of interaction between genetic epistemology and cognitive psychology was that, as cognitive psychology began to attain prominence, developmental psychologists were starting to question Piaget’s ideas. Many of his empirical claims about the abilities, or more accurately the inabilities, of children of various ages were discovered to be contaminated by his unorthodox, and in retrospect unsatisfactory, empirical methods. And many of his theoretical ideas were seen to be vague, uninterpretable, or inconsistent.
One of the central goals of the philosophy of science is to provide explicit and systematic accounts of the theories and explanatory strategies exploited in the sciences. Another common goal is to construct philosophically illuminating analyses or explications of central theoretical concepts invoked in one or another science. In the philosophy of biology, for example, there is a rich literature aimed at understanding teleological explanation, and there has been a great deal of work on the structure of evolutionary theory and on such crucial concepts as fitness and biological function. The philosophy of physics is another area in which studies of this sort have been actively pursued. In undertaking this work, philosophers need not (and typically do not) assume that there is anything wrong with the science they are studying. Their goal is simply to provide accounts of the theories, concepts, and explanatory strategies that scientists are using - accounts that are more explicit, systematic and philosophically sophisticated than the rather rough-and-ready accounts offered by scientists themselves.
Cognitive psychology is in many ways a curious and puzzling science. Many of the theories put forward by cognitive psychologists make use of a family of ‘intentional’ concepts - like believing that ‘p’, desiring that ‘q’, and representing ‘r’ - which do not appear in the physical or biological sciences, and these intentional concepts play a crucial role in many of the explanations offered by these theories.
If a person ‘X’ thinks that ‘p’, desires that ‘p’, believes that ‘p’, is angry at ‘p’, and so forth, then he or she is described as having a propositional attitude to ‘p’. The term suggests that these aspects of mental life are well thought of in terms of a relation to a ‘proposition’, and this is not universally agreed. It suggests that knowing what someone believes, and so on, is a matter of identifying an abstract object of their thought, rather than of understanding his or her orientation towards more worldly objects.
Once again, the directedness or ‘aboutness’ of many, if not all, conscious states constitutes their ‘intentionality’. The term was used by the scholastics. Beliefs, thoughts, wishes, dreams, and desires are about things; equally, the words we use to express these beliefs and other mental states are about things. The problem of intentionality is that of understanding the relation obtaining between a mental state, or its expression, and the things it is about. A number of peculiarities attend this relation. First, if I am in some relation to a chair, for instance by sitting on it, then both it and I must exist. But while mostly one thinks about things that exist, sometimes (although this way of putting it has its problems) one has beliefs, hopes, and fears about things that do not, as when the child expects Santa Claus and the adult fears ghosts. Secondly, if I sit on the chair, and the chair is the oldest antique chair in Toronto, then I sit on the oldest antique chair in Toronto. But if I plan to avoid the mad axeman, and the mad axeman is in fact my friendly postal carrier, I do not thereby plan to avoid my friendly postal carrier. The extension of a predicate is the class of objects it describes: the extension of ‘red’ is the class of red things. The intension is the principle under which the predicate picks them out, or in other words the condition a thing must satisfy to be truly described by it. Two predicates (‘. . . is a rational animal’, ‘. . . is a naturally featherless biped’) might pick out the same class, but they do so by different conditions. If the notions are extended to other items, then the extension of a sentence is its truth-value, and its intension a thought or proposition; and the extension of a singular term is the object referred to by it, if it so refers, while its intension is the concept by means of which the object is picked out. A context is extensional if any other predicate or term with the same extension can be substituted in it without it being possible that the truth-value changes: if John is a rational animal, and we substitute the coextensive ‘. . . is a naturally featherless biped’, then ‘John is a naturally featherless biped’ is also true. Other contexts, such as ‘Mary believes that John is a rational animal’, may not allow the substitution, and are called ‘intensional contexts’.
What remains is a distinction between the contexts into which referring expressions can be put. A context is referentially transparent if any two terms referring to the same thing can be substituted in it ‘salva veritate’, i.e., without altering the truth or falsity of what is said. A context is referentially opaque when this is not so. Thus, if the number of the planets is nine, then ‘the number of the planets is odd’ has the same truth-value as ‘nine is odd’, whereas ‘necessarily the number of the planets is odd’ or ‘x knows that the number of the planets is odd’ need not have the same truth-value as ‘necessarily nine is odd’ or ‘x knows that nine is odd’. So while ‘. . . is odd’ provides a transparent context, ‘necessarily . . . is odd’ and ‘x knows that . . . is odd’ do not.
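Schematically, with ‘Φ’ for the context and ‘⊢’ / ‘⊬’ marking valid and invalid inference, the contrast can be displayed as:

Transparent: a = b, Φa ⊢ Φb
Opaque: a = b, □Φa ⊬ □Φb

For instance, from ‘nine = the number of the planets’ and ‘necessarily (nine is odd)’ one may not infer ‘necessarily (the number of the planets is odd)’.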
Eliminativism is the view that the terms in which we think of some area are sufficiently infected with error for it to be better to abandon them than to continue to try to give coherent theories of their use. Eliminativism should be distinguished from scepticism, which claims that we cannot know the truth about some area; eliminativism claims that there is no truth there to be known, in the terms with which we currently think. An eliminativist about theology simply counsels abandoning the terms or discourse of theology, and that will include abandoning worries about the extent of theological knowledge. Eliminativists in the philosophy of mind counsel abandoning the whole network of terms ‘mind’, ‘consciousness’, ‘self’, ‘qualia’ that usher in the problems of mind and body. Sometimes the argument for doing this is that we should await a supposed future understanding of ourselves, based on cognitive science and better than that which our current mental descriptions provide; sometimes it is supposed that physicalism shows that no mental description could possibly be true.
It seems, nonetheless, to be a widespread view that the concept of intentionality is indispensable: we must either declare that serious science cannot deal with the central feature of the mind, or explain how serious science may include intentionality. One approach notes that reports of beliefs and fears have a two-faced aspect, involving both the object referred to and the mode of presentation under which it is thought of. We can then see the mind as essentially directed onto existent things, and extensionally related to them; intentionality becomes a feature of language, rather than a metaphysical or ontological peculiarity of the mental world.
While cognitive psychologists occasionally say a bit about the nature of intentional concepts and the explanations that exploit them, their comments are rarely systematic or philosophically illuminating. Thus it is hardly surprising that many philosophers have seen cognitive psychology as fertile ground for the sort of careful descriptive work that is done in the philosophy of biology and the philosophy of physics. Jerry Fodor’s ‘The Language of Thought’ (1975) was a pioneering study in this genre, one that continues to have a major impact on the field.
The relation between language and thought is philosophy’s chicken-or-egg problem. Language and thought are evidently importantly related, but how exactly are they related? Does language come first and make thought possible, or vice versa? Or are they on a par, each making the other possible?
When the question is stated at this level of generality, however, no unqualified answer is possible. In some respects language is prior, in other respects thought is prior. For example, it is arguable that a language is an abstract pairing of expressions and meanings - a function, in the set-theoretic sense, from expressions onto meanings. This makes sense of the fact that Esperanto is a language no one speaks, and it explains why, while it is a contingent fact that ‘La neige est blanche’ means that snow is white among French speakers, it is a necessary truth that it means that in French. If languages are abstract objects in this sense, then they exist whether or not anyone speaks them: they even exist in possible worlds in which there are no thinkers. In this respect, then, language, as well as such notions as meaning and truth in a language, is prior to thought.
But even if languages are construed as abstract expression-meaning pairings, they are abstractions from actual linguistic practice - from the use of language in communicative behaviour - and there remains a clear sense in which language is dependent on thought. The sequence of marks ‘Point Pelee is the most southern point of Canada’ means what it does among us only because of our linguistic practice; had our practice been different, that sequence of marks might have meant nothing at all among us, or something quite different. That it in fact means that Point Pelee is Canada’s most southern location has something to do with the beliefs and intentions underlying our use of the words and structures that compose the sentence. More generally, it is a platitude that the semantic features that marks and sounds have in a population are at least partly determined by the propositional attitudes of that population; this is the platitude, of course, that meaning depends, in part, on use in communicative behaviour. So here is one clear sense in which language is dependent on thought: thought is required to imbue marks and sounds with the semantic features they have in populations of speakers.
The sense in which language does not depend on thought can be wedded to the sense in which it does in the following way. We can say that a sequence of marks or sounds (or whatever) ‘σ’ means ‘q’ in a language ‘L’, construed as a function from expressions onto meanings, iff L(σ) = q. This notion of meaning-in-a-language, like the notion of a language, is a mere set-theoretic notion that is independent of thought, in that it presupposes nothing about the propositional attitudes of language users: ‘σ’ can mean ‘q’ in ‘L’ even if ‘L’ has never been used. But then we can also speak of ‘σ’ meaning ‘q’ in a population ‘P’. The question of moment becomes: what relation must a population ‘P’ bear to a language ‘L’ in order for it to be the case that ‘L’ is a language of ‘P’, a language members of ‘P’ actually speak? Whatever the answer to this question is, this much seems right: in order for a language to be a language of a population of speakers, those speakers must produce sentences of the language in their communicative behaviour. Since such behaviour is intentional, the notion of a language’s being the language of a population of speakers presupposes the notion of thought; and since it does, the same is true of the correct account of the semantic features expressions have in populations of speakers.
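The set-theoretic notion can be made vivid with a minimal sketch (purely illustrative; the expressions and ‘meanings’ below are placeholders of my own, not anything from the text):

```python
# A language L, construed set-theoretically, is just a function (here a
# dict) from expressions - sequences of marks or sounds - onto meanings.
L = {
    "La neige est blanche": "the proposition that snow is white",
    "Il pleut": "the proposition that it is raining",
}

def means_in(sigma: str, q: str, language: dict) -> bool:
    """'sigma' means q in the language  iff  language(sigma) = q."""
    return language.get(sigma) == q

# The definition invokes no propositional attitudes of any speaker:
# it holds even if the language has never been used by anyone.
print(means_in("Il pleut", "the proposition that it is raining", L))  # True
```

What this set-theoretic picture leaves entirely open is the further, substantive question raised next: the ‘actual-language relation’ that makes one such function the language of a given population.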
The result just reached - that the actual-language relation presupposes thought - is a pretty thin one, not likely to be disputed, and the difficult questions remain. We know that there is some relation ‘R’ such that a language ‘L’ is used by a population ‘P’ iff ‘L’ bears ‘R’ to ‘P’. Let us call this relation, whatever it turns out to be, the ‘actual-language relation’. We know that it must explain the semantic features expressions have among those who are apt to produce them, and we know that any account of the relation must require language users to have certain propositional attitudes. But how exactly is the actual-language relation to be explained in terms of the propositional attitudes of language users? And what sort of dependence might those propositional attitudes in turn have on language, or on the semantic features that are fixed by the actual-language relation? We consider first the relation of language to thought, before turning to the relation of thought to language.
All must agree that the actual-language relation, and with it the semantic features linguistic items have among speakers, is at least partly determined by the propositional attitudes of language users. This, however, leaves plenty of room for philosophers to disagree both about the extent of the determination and about the nature of the determining propositional attitudes. At one end of the spectrum, we have those who hold that the actual-language relation is wholly definable in terms of non-semantic propositional attitudes. This position in logical space is occupied by the programme, sometimes called intention-based semantics, of the English philosopher of language Herbert Paul Grice (1913-1988). Grice introduced the important concept of an ‘implicature’ into the philosophy of language, arguing that not everything that is said is direct evidence for the meaning of some term, since many factors may determine the appropriateness of remarks independently of whether they are actually true. The point undermines excessive attention to the niceties of conversation as reliable indicators of meaning, a methodology characteristic of ‘linguistic philosophy’. In a number of elegant papers, Grice identified the meaning of a sentence with a complex of the intentions with which it is uttered. The psychological is thus used to explain the semantic, and the question of whether this is the correct priority has prompted considerable subsequent discussion.
The foundational notion in this enterprise is a certain notion of ‘speaker meaning’. It is the species of communicative behaviour reported when we say, for example, that in uttering ‘Il pleut’ Pierre meant that it was raining, or that in waving her hand the Queen meant that you were to leave the room. Intention-based semantics seeks to define this notion of speaker meaning wholly in terms of communicators’ audience-directed intentions and without recourse to any semantic notions. It then seeks to define the actual-language relation in terms of the so-defined notion of speaker meaning, together with certain ancillary notions such as that of a conventional regularity or practice, themselves defined wholly in terms of non-semantic propositional attitudes. The definition in terms of speaker meaning of other agent-semantic notions, such as the notion of an illocutionary act, is also part of the intention-based semantics programme.
Some philosophers object to intention-based semantics because they think it precludes a dependence of thought on the communicative use of language. This is a mistake: even if the intention-based semantic definitions are given a strong reductionist reading, as saying that public-language semantic properties (i.e., those semantic properties that supervene on use in communicative behaviour) just are psychological properties, it might still be that one could not have propositional attitudes unless one had mastery of a public language. The concept of supervenience here invoked has seen increasing service in the philosophy of mind. The thesis that the mental supervenes on the physical - roughly, the claim that the mental character of a thing is wholly determined by its physical nature - has played a key role in the formulation of some influential positions on the mind-body problem, in particular versions of non-reductive physicalism. Mind-body supervenience has also been invoked in arguments for or against certain specific claims about the mental, and has been used to devise solutions to some central problems about the mind - for example, the problem of mental causation - such as the view that the psychological level of description carries with it a mode of explanation which ‘has no echo in physical theory’.
Mental events, states or processes with content include seeing that the door is shut, believing you are being followed, and calculating the square root of 2. What centrally distinguishes states, events, or processes with content is that they involve reference to objects, properties or relations. A mental state with content can fail to refer, but there always exists a specific condition under which a state with content would refer to certain things. When the state has a correctness or fulfilment condition, its correctness is determined by whether its referents have the properties the content specifies for them. This characterization leaves open the possibility that unconscious states, as well as conscious states, have content, and it equally allows the states identified by an empirical, computational psychology to have content. A correct philosophical understanding of this general notion of content is fundamental not only to the philosophy of mind and psychology, but also to the theory of knowledge and to metaphysics.
There is a long-standing tradition that emphasizes that the reason-giving relation is a logical or conceptual one. One way of bringing out the nature of this conceptual link is by reconstructing the reasoning that links the agent’s reason-providing states with the states for which they provide reasons. This reasoning is easiest to reconstruct in the case of reasons for belief, where the contents of the reason-providing beliefs inductively or deductively support the content of the rationalized belief. For example, I believe my colleague is in her room now, and my reasons are (1) she usually has a meeting in her room at 9:30 on Mondays and (2) it is now 9:30 on a Monday. To believe something is to accept it as true, and it is relative to the objective of reaching truth that the rationalizing relations between contents are set for belief: they must be such that the truth of the premises makes likely the truth of the conclusion.
The causal explanatory approach to reason-giving explanations also requires an account of the intentional content of our psychological states which makes it possible for such content to be doing causal work. It also provides a motivation for the reduction of intentional characterizations to extensional ones, in an attempt to fit intentional causality into a fundamentally materialist world picture. The very nature of the reason-giving relation, however, can be seen to render such reductive projects unrealizable. This therefore leaves causal theorists with the task of linking intentional and non-intentional levels of description in such a way as to accommodate intentional causality, without either over-determination or a miraculous coincidence of predictions from within distinct causal explanatory frameworks.
The idea that mentality is physically realized is integral to the ‘functionalist’ conception of mentality, and this commits most functionalists to mind-body supervenience in one form or another. Supervenience of the mental - in the form of strong supervenience, or at least global supervenience - is arguably a minimum commitment of physicalism. But can we think of the thesis of mind-body supervenience itself as a theory of the mind-body relation - that is, as a solution to the mind-body problem?
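For definiteness, strong supervenience can be given its standard formulation (supplied here for illustration, after Jaegwon Kim’s well-known schema rather than from the text): a family A of mental properties strongly supervenes on a family B of physical properties just in case

\[
\Box\,\forall x\,\forall F\!\in\!A\,\bigl[\,Fx \rightarrow \exists G\!\in\!B\,\bigl(Gx \wedge \Box\,\forall y\,(Gy \rightarrow Fy)\bigr)\bigr].
\]

Global supervenience, by contrast, requires only that any two possible worlds indiscernible in their distribution of B-properties be indiscernible in their distribution of A-properties.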
A supervenience claim consists of a covariance claim and a claim of dependence (leaving aside the controversial claim of non-reducibility). This means that the thesis that the mental supervenes on the physical amounts to the conjunction of two claims: (1) strong or global covariance, and (2) the mental depends on the physical. The thesis, however, says nothing about just what kind of dependence is involved in mind-body supervenience. When you compare the supervenience thesis with the standard positions on the mind-body problem, you are struck by what the supervenience thesis does not say. For each of the classic mind-body theories has something to say, not necessarily anything very plausible, about the kind of dependence that characterizes the mind-body relationship. According to epiphenomenalism, for example, the dependence is one of causal dependence; on logical behaviourism, the dependence is rooted in meaning dependence, or definability; on standard type physicalism, the dependence is the kind involved in the dependence of macro-properties on micro-properties; and so forth. Even Gottfried Wilhelm Leibniz (1646-1716) and Nicolas Malebranche (1638-1715) had something to say about this: the observed property covariation is due not to a direct dependency relation between mind and body but rather to divine plans and interventions. That is, mind-body covariation was explained in terms of their dependence on a third factor - a sort of ‘common cause’ explanation.
It would seem that any serious theory addressing the mind-body problem must say something illuminating about the nature of psychophysical dependence, or about why, contrary to common belief, there is no such dependence. There is reason to think, however, that ‘supervenient dependence’ does not signify a special type of dependence relation. This is evident when we reflect on the variety of ways in which a supervenience relation that holds in a given case could be explained. Consider, for example, the supervenience of the moral on the descriptive. The ethical naturalist will explain this on the basis of definability; the ethical intuitionist will say that the supervenience, and also the dependence, is a brute fact discerned through moral intuition; and the prescriptivist will attribute the supervenience to some form of consistency requirement on the language of evaluation and prescription. Distinct from all of these is mereological supervenience, namely the supervenience of properties of a whole on the properties and relations of its parts. What all this shows is that there is no single type of dependence relation common to all cases of supervenience: supervenience holds in different cases for different reasons, and does not represent a type of dependence that can be put alongside causal dependence, meaning dependence, mereological dependence and so forth.
If this is right, the supervenience thesis concerning the mental does not constitute an explanatory account of the mind-body relation on a par with the classic alternatives on the mind-body problem. It is merely the claim that the mental covaries in a systematic way with the physical, and that this is due to a certain dependence relation yet to be specified and explained. In this sense, the supervenience thesis states the mind-body problem rather than offering a solution to it.
There is, however, a promising strategy for turning the supervenience thesis into a more substantive theory of mind, and it is this: to explicate mind-body supervenience as a special case of mereological supervenience - that is, the dependence of the properties of a whole on the properties and relations characterizing its proper parts. Mereological dependence does seem to be a special form of dependence that is metaphysically distinctive and highly important. If one takes this approach, one would have to explain psychological properties as macroproperties of a whole organism that covary, in appropriate ways, with its microproperties, i.e., the way its constituents, tissues and so on are organized and function. This more specific supervenience thesis may well be a serious theory of the mind-body relation that can compete with the classic options in the field.
To return to the earlier suggestion that one could not have propositional attitudes unless one had mastery of a public language: whether or not this is plausible (that is a separate question), it would be no more logically puzzling than the idea that one could not have any propositional attitudes unless one had ones with certain sorts of contents. Tyler Burge’s insight that thought content is partly determined by the meanings of one’s words in one’s linguistic community (Burge 1979) is likewise perfectly consistent with an intention-based-semantics reduction of the semantic to the psychological. Nevertheless, there is reason to be sceptical of the intention-based semantics programme. First, no intention-based semantic theorist has succeeded in stating even a sufficient condition for speaker meaning, let alone completing the more difficult task of stating a necessary-and-sufficient condition. A plausible explanation of this failure is that what typically makes an utterance an act of speaker meaning is the speaker’s intention to be meaning or saying something, where the concept of meaning or saying used in the content of the intention is irreducibly semantic. Second, whether or not speaker meaning can be so defined, there is doubt about the intention-based semantic way of accounting for the actual-language relation in terms of speaker meaning. The essence of the approach is that sentences are conventional devices for making known a speaker’s communicative intentions: understanding is an inferential process wherein a hearer perceives an utterance and, thanks to being party to relevant conventions or practices, infers the speaker’s communicative intentions. Yet it appears that this inferential model is subject to insuperable epistemological difficulties. Third, there is no pressing reason to think that the semantic needs to be definable in terms of the psychological. Many intention-based semantic theorists have been motivated by a strong version of physicalism which requires the reduction of all intentional properties (i.e., all semantic and propositional-attitude properties) to physical, or at least topic-neutral or functional, properties; for it is plausible that there could be no reduction of the semantic and the psychological to the physical without a prior reduction of the semantic to the psychological. But it is arguable that such a strong version of physicalism is not what is required in order to fit the intentional into the natural order.
A further claimed dependence of thought on language is the thesis that propositional attitudes are relations to linguistic items, relations which obtain, at least partially, by virtue of the content those items have among language users. The position does not imply that believers have to be language users, but it does make language an essential ingredient in the concept of belief. The position is motivated by two considerations: (a) the supposition that believing is a relation to things believed, which have truth-values and stand in logical relations to one another, and (b) the desire not to take things believed to be propositions - abstract, mind- and language-independent things that essentially have the truth conditions they have. Now tenet (a) is well motivated: the relational construal of propositional attitudes is probably the best way to account for the quantification in ‘Harvey believes something nasty about you’. But it is probably a mistake to take linguistic items, rather than propositions, as the objects of belief. In the first place, if ‘Harvey believes that flounders snore’ is represented along the lines of B(Harvey, ‘flounders snore’), then one could know the truth expressed by the sentence about Harvey without knowing the content of his belief: for one could know that he stands in the belief relation to ‘flounders snore’ without knowing its content. This is unacceptable. In the second place, if Harvey believes that flounders snore, then what he believes - the reference of ‘that flounders snore’ - is that flounders snore. But what is this thing, that flounders snore? Well, it is abstract, in that it has no spatial location; it is mind- and language-independent, in that it exists in possible worlds in which there are neither thinkers nor speakers; and, necessarily, it is true iff flounders snore. In short, it is a proposition - an abstract, mind- and language-independent thing that has a truth condition and has essentially the truth condition it has.
A more plausible way in which thought depends on language is suggested by the popular thesis that we think in a ‘language of thought’. On one reading, this is nothing more than the vague idea that the neural states that realize our thoughts ‘have elements and structure in a way that is analogous to the way in which sentences have elements and structure’. But we can get a more literal rendering by relating it to the abstract conception of languages already recommended. On this conception, a language is a function from ‘expressions’ - sequences of marks or sounds or neural states or whatever - onto meanings, where meanings will include the propositions our propositional-attitude relations relate us to. We could then read the language-of-thought hypothesis as the claim that having propositional attitudes requires standing in a certain relation to a language whose expressions are neural states. There would now be more than one ‘actual-language’ relation: the one discussed earlier might better be called the ‘public-language relation’, the new one the ‘language-of-thought relation’. Since the abstract notion of a language has been so weakly construed, it is hard to see how the minimal language-of-thought proposal just sketched could fail to be true; at the same time, it has been given no interesting work to do. In trying to give it more interesting work, further dependencies of thought on language might come into play. For example, it has been claimed that the language of thought of a public-language user just is the public language she uses: her neural sentences are related to her spoken and written sentences in something like the way her written sentences are related to her spoken sentences. For another example, it might be claimed that even if one’s language of thought is distinct from one’s public language, the language-of-thought relation presupposes the public-language relation, in ways that make the contents of one’s neural sentences dependent on the meanings of one’s words in one’s public-language community.
Tyler Burge has in fact shown that there is a sense in which thought content is dependent on the meanings of words in one’s linguistic community. Suppose Alfred’s use of ‘arthritis’ is fairly standard, except that he is under the misconception that arthritis is not confined to the joints: he also applies the word to rheumatoid ailments not in the joints. Noticing an ailment in his thigh that is symptomatically like the disease in his hands and ankles, he says to his doctor, ‘I have arthritis in the thigh’; here Alfred is expressing his false belief that he has arthritis in the thigh. But now consider a counterfactual situation that differs in just one respect (and whatever it entails): Alfred’s use of ‘arthritis’ is the correct use in his linguistic community. In this situation, Alfred would be expressing a true belief when he says ‘I have arthritis in the thigh’. Since the proposition he there believes is true while the proposition that he has arthritis in the thigh is false, he believes some other proposition. This shows that standing in the belief relation to a proposition can be partly determined by the meanings of words in one’s public language. The Burge phenomenon seems real, but it would be nice to have a deep explanation of why thought content should be dependent on language in this way.
Finally, there is the old question of whether, or to what extent, a creature who does not understand a natural language can have thoughts. It seems pretty compelling that higher mammals, and humans raised without language, have their behaviour controlled by mental states that are sufficiently like our beliefs, desires, and intentions to share those labels. It also seems easy to imagine non-communicating creatures who have sophisticated mental lives (they build weapons, dams, and bridges, have clever hunting strategies, and so on). At the same time, ascriptions of particular contents to non-language-using creatures typically seem exercises in loose speaking (does the dog really believe that there is a bone in the yard?), and it is no accident that, as a matter of fact, creatures who do not understand a natural language have at best primitive mental lives. It might be that the primitive mental lives of animals account for their failure to master natural language, but the better explanation may be Chomsky’s: that language requires a faculty unique to our species. As regards the inevitably primitive mental life of an otherwise normal human raised without language, this might simply be due to the ignorance and lack of intellectual stimulation such a person would be doomed to. On the other hand, it might also be that higher thought requires a neural language with structure comparable to that of a natural language, and that such neural languages are somehow acquired in the acquisition of natural language. The ascription of content to the propositional-attitude states of languageless creatures is a difficult topic that needs more attention. It is possible that when we better understand our ascriptions of propositional content, we will realize that they are egocentrically based on a similarity to the language in which we express our beliefs. We might then learn that we have no principled basis for ascribing propositional content to a creature who does not speak something like a natural language, or who does not have internal states with natural-language-like structure. It is somewhat surprising how little we know about thought’s dependence on language.
The language-of-thought hypothesis has a compelling neatness about it. A thought is depicted as a structure of internal representational elements, combined in a lawful way, which plays a certain functional role in an internal processing economy. The functionalist thinks of mental states and events as causally mediating between a subject’s sensory inputs and that subject’s ensuing behaviour. Functionalism itself is the stronger doctrine that what makes a mental state the type of state it is - a pain, a smell of violets, a belief that koalas are dangerous - is the functional relations it bears to the subject’s perceptual stimuli, behavioural responses, and other mental states.
The representational theory of the mind arises with the recognition that thoughts have contents carried by mental representations.
Nonetheless, theorists seeking to account for the mind’s activities have long sought analogues to the mind. In modern cognitive science, these analogues have provided the bases for simulation or modelling of cognitive performance: when a simulation performs in a manner comparable to the mind, that offers support for the theory underlying the analogue on which the simulation is based. Simulation also serves a heuristic function, suggesting ways in which the mind might operate, characterized in physical terms. The problem of how physical states could represent is most obvious in the case of ‘arbitrary’ signs, like words, where it is clear that there is no connection between the physical properties of a word and what it denotes (and the problem remains for iconic representation). What kind of mental representation might support denotation and attribution, if not linguistic representation? Thoughts, in having content, possess semantic properties, and denotation and attribution are among them; if thoughts denote and attribute, sententialism may be best positioned to explain how this is possible.
Beliefs are true or false. If, as representationalism would have it, beliefs are relations to mental representations, then beliefs must be relations to representations that have truth-values among their semantic properties. Beliefs serve a function within the mental economy: they play a central part in reasoning and thereby contribute to the control of behaviour. To be rational, a set of beliefs, desires, and actions - also perceptions, intentions, and decisions - must fit together in various ways. If they do not, in the extreme case they fail to constitute a mind at all: no rationality, no agent. This core notion of rationality in the philosophy of mind thus concerns a cluster of personal identity conditions - that is, ‘holistic’ coherence requirements upon the system of elements comprising a person’s mind. Related conceptions of epistemic or normative rationality apply among the cognitive, as distinct from qualitative, mental states. The main issue is characterizing these types of mental coherence.
Closely related to thought’s systematicity is its productivity: we have a virtually unbounded competence to think ever more complex novel thoughts having clear semantic ties to their less complex predecessors. Systems of mental representation apparently exhibit the sort of productivity distinctive of spoken languages. Sententialism accommodates this fact by identifying the productive system of mental representation with a language of thought, the basic terms of which are subject to a productive grammar.
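A toy sketch can make the point concrete (a purely illustrative construction, with made-up atomic sentences, not anything from the text): a finite stock of basic terms plus recursive combination already yields unboundedly many novel, ever more complex sentences whose semantic ties to their predecessors are fixed by their structure.

```python
import itertools

# Two made-up atomic "mental sentences".
ATOMS = ["flounders snore", "koalas are dangerous"]

def sentences(depth: int):
    """Yield every sentence buildable with at most `depth` rounds of
    applying the connectives; each round produces novel, more complex
    sentences semantically tied to their less complex predecessors."""
    if depth == 0:
        yield from ATOMS
        return
    smaller = list(set(sentences(depth - 1)))
    yield from smaller
    for s in smaller:
        yield f"not ({s})"                       # unary combination
    for a, b in itertools.product(smaller, repeat=2):
        yield f"({a}) and ({b})"                 # binary combination

print(len(set(sentences(1))))  # 8 distinct sentences from 2 atoms
print(len(set(sentences(2))))  # 74: rapid, unbounded growth with depth
```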
Possibly, in reasoning, mental representations stand to one another just as public sentences do in valid formal derivations. Reasoning would then preserve truth of belief by being the manipulation of truth-valued sentential representations according to rules so selectively sensitive to the syntactic properties of the representations as to respect and preserve their semantic properties. The sententialist hypothesis is thus that reasoning is formal inference - a process tuned primarily to the structure of mental sentences. Reasoners, then, are things very much like classically programmed computers. Thinking, according to sententialism, may then be like quoting. To quote an English sentence is to issue, in a certain way, a token of a given English sentence type: it is certainly not thereby to issue a token of every semantically equivalent type. Perhaps thought is much the same: if to think is to token a sentence in the language of thought, the sheer tokening of one mental sentence need not ensure the tokening of another, formally distinct, equivalent - hence thought’s opacity.
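Here is a minimal sketch of the idea (illustrative only; the rule and sentences are invented for the example): an inference rule that inspects nothing but the shape of sentence tokens, yet, if the input beliefs are true, adds only true ones - syntax tuned so as to respect semantics. Notice too that a semantically equivalent but formally distinct antecedent would not trigger the rule, which mirrors the opacity point just made.

```python
def modus_ponens(beliefs):
    """Add 'q' whenever the store contains both 'p' and 'if p then q'.
    The rule is purely syntactic: it matches token shapes, never meanings."""
    derived = set(beliefs)
    for b in beliefs:
        if b.startswith("if ") and " then " in b:
            antecedent, consequent = b[3:].split(" then ", 1)
            if antecedent in beliefs:   # exact formal match required
                derived.add(consequent)
    return derived

store = {"flounders snore", "if flounders snore then fish sleep"}
print(modus_ponens(store))  # adds 'fish sleep' on the basis of shape alone
```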
Objections to the language of thought come from various quarters. Some will not tolerate any version of representationalism, including sententialism; others endorse representationalism while denying that mental representations could involve anything like a language. Representationalism is launched by the assumption that psychological states are relational - that being in a psychological state minimally involves being related to something. But perhaps psychological states are not relational at all. Verbalism begins by denying that expressions of psychological states are relational, infers that psychological states themselves are monadic, and thereby opposes classical versions of representationalism, including sententialism.
Prompted by Chomsky’s work and by advances in computer science, the 1960s saw a rebirth of ‘mentalistic’ or ‘cognitivist’ approaches to psychology and the study of mind.
These philosophical accounts of cognitive theories and the concepts they invoke are generally much more explicit than the accounts provided by psychologists, and they inevitably smooth over some of the rough edges of scientists’ actual practice. But if the account they give of cognitive theories diverges significantly from the theories psychologists actually propound, then the philosophers have simply got it wrong. There is, however, a very different way in which philosophers have approached cognitive psychology. Rather than merely trying to characterize what cognitive psychology is actually doing, some philosophers try to say what it should and should not be doing. Their goal is not to explicate scientific practice but to criticize and improve it. The most common target of this critical approach is the use of intentional concepts in cognitive psychology. Intentional notions have been criticized on various grounds; the two taken up here are that they fail to supervene on the physiology of the cognitive agent, and that they cannot be ‘naturalized’.
Perhaps the most radical approach is the proposal that cognitive psychology should recast its theories and explanations in a way that appeals not to intentional properties but to ‘syntactic’ properties. Somewhat less radical is the suggestion that we can define a species of ‘narrow’ intentional property which does supervene on an organism’s physiology, and that psychological explanations appealing to ordinary (‘wide’) intentional properties can be replaced by explanations that invoke only their narrow counterparts. Many philosophers, however, have urged that the problem lies in the argument, not in the way that cognitive psychology goes about its business. The most common critique focuses on the normative premise - the one insisting that psychological explanations ought not to appeal to ‘wide’ properties that fail to supervene on physiology. Why should psychological explanations not appeal to wide properties, the critics ask? What exactly is wrong with psychological explanations invoking properties that do not supervene on physiology? Various answers have been proposed in the literature, though they typically end up invoking metaphysical principles that are less clear and less plausible than the normative thesis they are supposed to support.
For any psychological property that fails to supervene on physiology, it is trivial to characterize a narrow correlate property that does supervene. The extension of the correlate property includes all actual and possible objects in the extension of the original property, plus all actual and possible physiological duplicates of those objects. Theories originally stated in terms of wide psychological properties can then be recast in terms of their narrow correlates without loss of descriptive or explanatory power. It might be protested that, when characterized in this way, narrow belief and narrow content are not really species of belief and content at all. But it is far from clear how this claim could be defended, or why we should care if it turns out to be right.
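The construction can be sketched in a few lines (an illustrative toy model; the individuals, physiologies, and ‘Twin’ naming are stand-ins of my own, in the spirit of the usual Twin Earth cases):

```python
# Model each individual as (name, physiology); physiological duplicates
# share the second component.
domain = [("Oscar", "P1"), ("Twin-Oscar", "P1"), ("Elmer", "P2")]

# A wide property's extension: only Oscar has it (his environment differs
# from Twin-Oscar's, so the wide property fails to supervene on physiology).
wide_extension = {("Oscar", "P1")}

def narrow_correlate(wide_ext, domain):
    """Extension of the narrow correlate: everything in the wide extension,
    plus every physiological duplicate of anything in it."""
    physiologies = {phys for (_, phys) in wide_ext}
    return set(wide_ext) | {ind for ind in domain if ind[1] in physiologies}

print(narrow_correlate(wide_extension, domain))
# {('Oscar', 'P1'), ('Twin-Oscar', 'P1')} -- by construction, physiological
# duplicates always agree on the narrow correlate, so it supervenes.
```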
The worry about the ‘naturalizability’ of intentional properties is much harder to pin down. According to Fodor, the worry derives from a certain ontological intuition: that there is no place for intentional categories in a physicalistic view of the world, and thus that the semantic and/or the intentional will prove permanently recalcitrant to integration into the natural order. If intentional properties cannot be integrated into the natural order, then presumably they ought to be banished from serious scientific theorizing: psychology should have no truck with them. Indeed, if intentional properties have no place in the natural order, then nothing in the natural world has intentional properties, and intentional states do not exist at all. So goes the worry. Unfortunately, neither Fodor nor anyone else has said anything very helpful about what is required to ‘integrate’ intentional properties into the natural order. There are, to be sure, various proposals to be found in the literature. But all of them seem to suffer from a fatal defect: on each account of what is required to naturalize a property or integrate it into the natural order, there are lots of perfectly respectable non-intentional scientific or common-sense properties that fail to meet the standard. Thus all the proposals made so far end up having to be rejected.
Now, of course, the fact that no one has been able to give a plausible account of what is required to ‘naturalize’ the intentional may indicate nothing more than that the project is a difficult one; perhaps with further work a more plausible account will be forthcoming. But one might also offer a very different diagnosis of the failure of all the accounts of ‘naturalizing’ so far offered: perhaps the ‘ontological intuition’ that underlies the worry about integrating the intentional into the natural order is simply muddled; perhaps there is no coherent criterion of naturalization or naturalizability that all properties invoked in respectable science must meet. Perhaps this diagnosis is the right one. Until those who are worried about the naturalizability of the intentional provide us with some plausible account of what is required of intentional categories if they are to find a place in ‘a physicalistic view of the world’, we are justified in refusing to take their worry seriously.
Recently, John Searle (1992) has offered a new set of philosophical arguments aimed at showing that certain theories in cognitive psychology are profoundly wrong-headed. The theories that are his target offer computational explanations of various psychological capacities - such as the capacity to recognize grammatical sentences, or the capacity to judge which of two objects in one’s visual field is further away. Typically, these theories are set out in the form of a computer program - a set of rules for manipulating symbols - and the explanation offered for the exercise of the capacity in question is that people’s brains are executing the program. The central claim in Searle’s critique is that being a symbol or a computational state is not an ‘intrinsic’ physical feature of a computer state or a brain state; rather, being a symbol is an ‘observer-relative’ feature. Yet, Searle maintains, only intrinsic properties of a system can play a role in causal explanations of how it works. Thus, appeal to symbolic or computational states of the brain could not possibly play a role in a causal account of cognition.
The foregoing has surveyed some of the philosophical arguments aimed at showing that cognitive psychology is confused and in need of reform. My reaction to those arguments has been none too sympathetic: in each case, I have maintained, it is the philosophical argument that is problematic, not the psychology it criticizes.
It is fair to ask where we get the powerful inner code whose representational elements need only systematic construction to express, for example, the thought that cyclotrons are bigger and vaster than black holes. On this matter the language-of-thought theorist has little to say. All that concept learning could be, assuming it is to be some kind of rational process and not due to mere physical maturation or a bump on the head, is, according to the language-of-thought theorist, the trying out of combinations of existing representational elements to see if a given combination captures the sense (as evidenced in its use) of some new concept. The consequence is that concept learning, conceived as the expansion of our representational resources, simply does not happen. What happens instead is that we work with a fixed, innate repertoire of elements whose combination and construction must express any content we can ever learn to understand. And note that this is not the trivial claim that in some sense the resources a system starts with must set limits on what knowledge it can acquire. For these are limits which flow not from, for example, sheer physical size, number of neurons, or connectivity of neurons, but from a base class of genuinely representational elements. They are more like the limits that being restricted to the propositional calculus would place on the expressive power of a system than, say, the limits that having a certain amount of available memory storage would place on one.
But this picture of representational stasis, in which all change consists in the redeployment of existing representational resources, is fundamentally alien to much influential theorizing in developmental psychology, whose prime examples place a much stronger form of change - genuine expansion of representational power - at the very heart of a model of human development. In a similar vein, recent work in the field of connectionism seems to open up the possibility of putting well-specified models of strong representational change back at the centre of cognitive scientific endeavour.
Nonetheless, understanding how the underlying combinatorial code develops may be more important to a deep understanding of cognitive processes than understanding the structure and use of the code itself (though doubtless the two projects would need to be pursued hand in hand).
The language of thought depicts thoughts as structures of concepts, which in turn exist as elements (for basic concepts) or concatenations of elements (for the rest) in the inner code. Intentional states, as common sense understands them, have both causal and semantic properties, and the combination appears to be unprecedented. A further problem, for inferential-role semantics, is that it is almost invariably suicidally holistic. And if externalism is right, then (some of) the intentional properties of thought are essentially ‘extrinsic’: they essentially involve mind-to-world relations. Assuming that the computational role of a mental representation is determined entirely by its intrinsic properties - such properties as its weight, shape, or electrical conductivity, as it might be - it is hard to see how computation could be sensitive to these extrinsic properties. Which is to say that it is hard to see how there could be computationally sufficient conditions for being in an intentional state; which is to say that it is hard to see how the immediate implementation of intentional laws could be computational.
However, the language-of-thought theorist has little to say about semantic relations between basic representational items. Even bracketing the (difficult) question of which, if any, words in our public language express contents which have as their vehicles atomic items in the language of thought (an empirical question on which Fodor seems officially agnostic), the question of semantic relations between atomic items in the language of thought remains. Are there any such relations? And if so, in what do they consist? Two thoughts are depicted as semantically related just in case they share elements; the elements themselves (like the words of the public language on which they are modelled) seem to stand in splendid isolation from one another. An advantage of some connectionist approaches lies precisely in their ability to address questions of the interrelation of basic representational elements (in fact, activation vectors) by representing such items as locations in a kind of semantic space. In such a space, related contents are always expressed by related representational elements. The connectionist’s conception of significant structure thus goes much deeper than the Fodorian’s, for connectionist representations need never be arbitrary: even the most basic representational items will bear non-accidental relations of similarity and difference to one another. The Fodorian, having reached representational bedrock, must explicitly construct any such further relations; they do not come for free as a consequence of using an integrated representational space. Whether this is a bad thing or a good one will depend, of course, on what kinds of facts we need to explain. But one may suspect that representational atomism will turn out to be a conceptual economy that a science of the mind cannot afford.
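The semantic-space idea can be illustrated in miniature (the vectors below are invented placeholders, not data from any actual model): if basic items are locations in a space of activation values, similarity of content falls out of geometry rather than having to be stipulated.

```python
import math

# Hypothetical activation vectors over three feature dimensions.
vectors = {
    "dog":    (0.9, 0.8, 0.1),
    "wolf":   (0.8, 0.9, 0.2),
    "pebble": (0.1, 0.0, 0.9),
}

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Related contents occupy nearby locations; unrelated ones do not.
print(round(cosine(vectors["dog"], vectors["wolf"]), 2))    # ~0.99
print(round(cosine(vectors["dog"], vectors["pebble"]), 2))  # ~0.16
```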
An approach that ascribes contents on the basis of behaviour must deal with the point that it seems metaphysically possible for there to be something that, in actual and counterfactual circumstances, behaves as if it enjoys states with content, when in fact it does not. If the possibility is not denied, this approach must add at least that the states with content causally interact in various ways with one another, and also causally produce intentional action. For most causal theorists, however, a radical separation of the causal and rationalizing roles of reason-giving explanations is unsatisfactory. For such theorists, where we can legitimately point to an agent’s reasons to explain a certain belief or action, those features of the agent’s intentional states that render the belief or action reasonable must be causally relevant in explaining how the agent came to believe or act in the way they rationalize. One way of putting this requirement is that reason-giving states must not only cause, but also causally explain, their explananda.
On most accounts of causation, an acceptance of the causal explanatory role of reason-giving connections requires empirical causal laws employing intentional vocabulary, and it is arguments against the possibility of such laws that have been fundamental for those opposing a causal explanatory view of reasons. What is centrally at issue in these debates is the status of the generalizations linking intentional states to each other and to ensuing intentional acts. An example of such a generalization would be: if a person desires ‘X’, believes that doing ‘A’ would be a way of promoting ‘X’, is able to do ‘A’, and has no conflicting desires, then she will do ‘A’. For many theorists such generalizations are a priori links between desire, belief and action: grasping their truth is required to grasp the nature of the intentional states concerned. For some theorists, the a priori elements within such generalizations disqualify them as empirical laws. That, however, seems too quick, for it would similarly rule out any generalizations in the physical sciences that contain a priori elements as a consequence of the implicit definition of their theoretical kinds within a causal explanatory theory. Causal theorists, including functionalists in the philosophy of mind, can claim that it is just such implicit definition that accounts for the a priori status of our intentional generalizations.
The existence of such causal links could well be written into the minimal core of rational transitions required for the ascription of the contents in question. Yet it is one thing to agree that the ascription of content involves a species of rational intelligibility, another to provide an explanation of this fact. There are competing explanations. One treatment regards rational intelligibility as ultimately dependent upon what we find intelligible, or on what we could come to find intelligible in suitable circumstances; this is an analogue of classical treatments of secondary qualities, and as such is a form of subjectivism about content. An alternative position regards the particular conditions for correct ascription of given contents as more fundamental, and states that interpretation must respect these particular conditions. In the case of conceptual contents, this alternative could be developed in tandem with the view that concepts are individuated by the conditions for possessing them. These possession conditions would then function as constraints upon correct interpretation. If such a theorist also assigns references to concepts in such a way that the minimal rational transitions are always truth-preserving, he will also have succeeded in explaining why such transitions are correct. Under an approach that treats conditions for attribution as fundamental, intelligibility need not be treated as a subjective property. There may be concepts we could never grasp because of our intellectual limitations, just as there will be concepts that members of other species could not grasp. Such concepts have their possession conditions, but some thinkers could not satisfy those conditions.
Ascribing states with content to an actual person has to proceed simultaneously with the attribution of a wide range of non-rational states and capacities. In general, we cannot understand a person’s reasons for acting as he does without knowing the array of emotions and sensations to which he is subject, what he remembers and what he forgets, and how he reasons beyond the confines of minimal rationality. Even the content-involving perceptual states, which play a fundamental role in individuating content, cannot be understood purely in terms relating to minimal rationality. A perception of the world as being a certain way is not (and could not be) under a subject’s rational control. Though it is true and important that perceptions give reasons for forming beliefs, the beliefs for which they fundamentally provide reasons - observational beliefs about the environment - have contents which can only be elucidated by reference back to perceptual experience. In this respect (as in others) perceptual states differ from those beliefs and desires that are individuated by mentioning what they provide reasons for judging or doing, for frequently these latter judgements and actions can be individuated without reference back to the states that provide reasons for them.
What is the significance for theories of content of the fact that it is almost certainly adaptive for members of a species to have a system of states with representational contents capable of influencing their actions appropriately? According to teleological theories of content, a constitutive account of content - one which says what it is for a state to have a given content - must make use of the notions of natural function and teleology. The intuitive idea is that for a belief state to have a given content ‘p’ is for the belief-forming mechanisms which produced it to have the function (perhaps derivatively) of producing that state only when it is the case that ‘p’. One issue this approach must tackle is whether it is really capable of associating with states the classical, realistic, verification-transcendent contents which, pre-theoretically, we attribute to them; it is not clear that a content’s holding unknowably can influence the replication of belief-forming mechanisms. But even if content itself proves to resist elucidation in terms of natural function and selection, it is still a very attractive view that selection must be mentioned in an account of what associates something - such as a sentence - with a particular content, even though that content itself may be individuated by other means.
Contents are normally specified by ‘that . . .’ clauses, and it is natural to suppose that a content has the same kind of sequential and hierarchical structure as the sentence that specifies it. This supposition would be widely accepted for conceptual content. It is, however, a substantive thesis that all content is conceptual. One way of treating one sort of perceptual content is to regard the content as determined by a spatial type, the type under which the region of space around the perceiver must fall if the experience with that content is to represent the environment correctly. The type involves a specification of surfaces and features in the environment, and their distances and directions from the perceiver’s body as origin. Such contents lack any sentence-like structure at all. Supporters of the view that all content is conceptual will argue that the legitimacy of using these spatial types in giving the content of experience does not undermine their thesis: they will say that the spatial type is just a way of capturing what can equally be captured by conceptual components such as ‘that distance’ or ‘that direction’, where these demonstratives are made available by the perception in question. Friends of non-conceptual content will respond that these demonstratives themselves cannot be elucidated without mentioning the spatial types, which lack sentence-like structure.
The actions made rational by content-involving states are actions individuated in part by reference to the agent’s relations to things and properties in his environment. Wanting to see a particular movie, and believing that that building over there is a cinema showing it, makes rational the action of walking in the direction of that building. Similarly, in the fundamental case of a subject who has knowledge about his environment, a crucial factor in making rational the formation of particular attitudes is the way the world is around him. One may expect, then, that any theory that links the attribution of contents to states with rational intelligibility will be committed to the thesis that the content of a person’s states depends in part on his relations to the world outside him. We call this the thesis of externalism about content.
Externalism about content should steer a middle course. On the one hand, it should not ignore the truism that the relations of rational intelligibility involve not just things and properties in the world, but the ways they are presented as being - an externalist should use some version of Frege’s notion of a mode of presentation. On the other hand, the externalist for whom considerations of rational intelligibility are pertinent to the individuation of content is likely to insist that we cannot dispense with the notion of something in the world being presented in a certain way. If we dispense with the notion of something external being presented in a certain way, we are in danger of regarding attributions of content as having no consequences for how an individual relates to his environment, in a way that is quite contrary to our intuitive understanding of rational intelligibility.
Externalism comes in more and less extreme versions. Consider a thinker who perceives a particular pear and thinks the thought that that pear is ripe, where the demonstrative way of thinking of the pear expressed by ‘that pear’ is made available to him by his perceiving the pear. Some philosophers have held that the thinker would be employing a different perceptually based way of thinking were he perceiving a different pear. But externalism need not be committed to this. In the perceptual state that makes available the way of thinking, the pear is presented as being at a particular distance and as having certain properties. A position will still be externalist if it holds that what is involved in the pear’s being so presented is the collective role of these components of content in making intelligible, in various circumstances, the subject’s relations to environmental directions, distances and properties of objects. This can be held without commitment to the object-dependence of the way of thinking expressed by ‘that pear’. This less strenuous form of externalism must, though, address the epistemological arguments offered in favour of the more extreme versions, to the effect that only they are sufficiently world-involving.
The apparent dependence of the content of belief on factors external to the subject can be formulated as a failure of supervenience of belief content upon facts about what is the case within the boundaries of the subject's body. To claim that such supervenience fails is to make a modal claim: that there can be two persons the same in respect of their internal physical states (and so in respect of those of their dispositions that are independent of content-involving states), who nevertheless differ in respect of which beliefs they have. Hilary Putnam (1926-), in Reason, Truth, and History (1981), marked out a subtle position that he calls internal realism, initially tied to an ideal-limit theory of truth and apparently maintaining affinities with verificationism, but in subsequent work more closely aligned with minimalism. Putnam's concern in the later period has largely been to deny any serious asymmetry between truth and knowledge as obtained in morals, and even in theology.
Nonetheless, in the case of content-involving perceptual states, it is a much more delicate matter to argue for the failure of supervenience. The fundamental reason is that such a state is answerable not only to factors on the input side - what, in certain fundamental cases, causes the subject to be in the perceptual state - but also to factors on the output side - what the perceptual state is capable of helping to explain amongst the subject's actions. If differences in perceptual content always involve differences in bodily described actions in suitable counterfactual circumstances, and if these differences in action always involve differences in internal states, there will after all be supervenience of content-involving perceptual states on internal states. But even if this should turn out to be so, it is not a refutation of externalism about perceptual content. A different reaction is that formulating the dependence as one of supervenience is in some cases too strong. A better formulation is given by a constitutive claim: that what makes a state have the content it does are certain of its complex relations to external states of affairs. This can be held without commitment to the modal separability of certain internal states from content-involving perceptual states.
Attractive as externalism about content may be, it has been vigorously contested, notably by the American philosopher of mind Jerry Alan Fodor (1935-), who is known for a resolute realism about the nature of mental functioning. Taking the analogy between thought and computation seriously, Fodor believes that mental representations should be conceived as individual states with their own identities and structures, like formulae transformed by processes of computation or thought. His views are frequently contrasted with those of 'holists' such as Donald Herbert Davidson (1917-2003). Although Davidson is a defender of the doctrines of the 'indeterminacy' of radical translation and the 'inscrutability' of reference, his approach has seemed to many to offer some hope of identifying meaning as a respectable notion, even within a broadly 'extensional' approach to language. Davidson is also known for his rejection of the idea of a 'conceptual scheme', thought of as something peculiar to one language or one way of looking at the world, arguing that where the possibility of translation stops, so does the coherence of the idea that there is anything to translate. Fodor (1981), for his part, endorses the importance of explanation by content-involving states, but holds that content must be narrow, constituted by internal properties of an individual.
One influential motivation for narrow content is a doctrine about explanation: that molecule-for-molecule counterparts must have the same causal powers. Externalists have replied that attributions of content-involving states presuppose some normal background or context for the subject of the states, and that content-involving explanations commonly take the presupposed background for granted. Molecular counterparts can have different presupposed backgrounds, and their content-involving states may correspondingly differ. Presupposition of a background of external relations in which something stands is found in sciences other than those that employ the notion of content, including astronomy and geology.
A more specific concern of those sympathetic to narrow content is that when content is externally individuated, the explanatory principles postulated, in which content-involving states feature, will be a priori in some way that is illegitimate. For instance, it appears to be a priori that behaviour which is intentional under some description involving the concept 'water' will be explained by mental states that have the externally individuated concept 'water' in their content. The externalist about content has a twofold response. First, explanations in which content-involving states are implicated will also include explanations of the subject's standing in a particular relation to the stuff water itself, and for many such relations it is in no way a priori that the thinker's so standing has a psychological explanation at all. Some such cases will be fundamental to the ascription of externalist content on treatments that tie such content to the rational intelligibility of actions relationally characterized. Second, there are other cases in which the identification of a theoretically postulated state in terms of its relations generates a priori truths, quite consistently with that state playing a role in explanation. It is arguably a priori that if something is the gene for a given phenotypical characteristic, then it plays a causal role in the production of that characteristic in members of the species in question. Far from being incompatible with a claim about explanation, the characterization of genes that would make this a priori also requires genes to have a certain causal-explanatory role.
If anything, it is the friend of narrow content who has difficulty accommodating the way contents are fit to explain bodily movements in environment-involving terms. The characteristic explananda of content-involving states, such as walking towards the cinema, are characterized in environment-involving terms. How is the theorist of narrow content to accommodate this fact? He may say that we merely need to add a description of the context of the bodily movement, which ensures that the movement is in fact a movement towards the cinema. But merely adding a specification of the context to an explanation of the bodily movement does not give one an explanation of the event's having that environmental property, let alone a content-involving explanation of the fact. The bodily movement may also be a walking in the direction of Moscow, but it does not follow that we have a rationally intelligible explanation of the event as a walking in the direction of Moscow. Perhaps the theorist of narrow content would at this point add further relational properties of the internal states, of such a kind that when his explanation is fully supplemented, it sustains the same counterfactuals and predictions as the explanation that mentions externally individuated content. But such a fully supplemented explanation is not really in competition with the externalist's account. It begins to appear that if such extensive supplementation is adequate to capture the relational explananda, it is also sufficient to ensure that the subject is in states with externally individuated contents. This problem affects not only treatments of content as narrow, but any attempt to reduce explanation by content-involving states to explanation by neurophysiological states.
One of the tasks of a sub-personal computational psychology is to explain how individuals come to have beliefs, desires, perceptions and other personal-level content-involving properties. If the content of personal-level states is externally individuated, then the contents mentioned in the sub-personal psychology that is explanatory of those personal states must also be externally individuated. One cannot fully explain the presence of an externally individuated state by citing only states that are internally individuated. On an externalist conception of sub-personal psychology, a content-involving computation commonly consists in the explanation of some externally individuated states by other externally individuated states.
This view of sub-personal content has, though, to be reconciled with the fact that the first states in an organism involved in the explanation - retinal states, in the case of humans - are not externally individuated. The reconciliation is effected by the presupposed normal background, whose importance to the understanding of content we have already emphasized. An internally individuated state, when taken together with a presupposed external background, can explain the occurrence of an externally individuated state.
An externalist approach to sub-personal content also has the virtue of providing a satisfying explanation of why certain personal-level states are reliably correct in normal circumstances. If the sub-personal computations that cause the subject to be in such states are reliably correct, and the final computation yields the content of the personal-level state, then the personal-level state will be reliably correct. A similar point applies to reliable errors, too, of course. In either case, the attribution of correctness conditions to the sub-personal states is essential to the explanation.
Externalism generates its own set of issues that need resolution, notably in the epistemology of attributions. A content-involving state may be externally individuated, but a thinker does not need to check on his relations to his environment to know the content of his beliefs, desires, and perceptions. How can this be? It is a first step to note that a thinker's judgements about his beliefs are rationally responsive to his own conscious beliefs, so that his beliefs about his own beliefs will inherit certain sensitivities to his environment that are present in his original (first-order) beliefs. But this is only a first step, for many important questions remain. How can there be conscious, externally individuated states at all? Is it legitimate to infer from the content of one's states to certain general facts about one's environment, and if so, how, and under what circumstances?
The ascription of attitudes to others also needs further work on the externalist treatment. In order knowledgeably to ascribe a particular content-involving attitude to another person, we certainly do not need to have explicit knowledge of the external relations required for correct attribution of the attitude. How then do we manage it? Do we have tacit knowledge of the relations on which content depends, or do we in some way take our own case as primary, and think of the relations as whatever underlies certain of our own content-involving states? If the latter, in what wider view of other-ascription should this point be embedded? Resolution of these issues, like so much else in the theory of content, should provide us with some understanding of the conception each of us has of himself as one mind amongst many, interacting with a common world which provides the anchor for the ascription of content.
Anything that deserves to be called 'thought' exhibits the features of 'intentionality' or 'content': in thinking, one thinks about certain things, and one thinks certain things about those things - one entertains propositions that purport to represent states of affairs. Nearly all the interesting properties of thoughts depend upon their content: their being coherent or incoherent, disturbing or reassuring, revolutionary or banal, connected logically or illogically to other thoughts. It is thus hard to see why we would bother to talk of thought at all unless we were also prepared to recognize the intentionality of thought. So we are naturally curious about the nature of content: we want to understand what makes it possible, what constitutes it, what it stems from. To have a theory of thought is to have a theory of its content.
Four issues have dominated recent thinking about the content of thought; each may be construed as a question about what thought depends on, and about the consequences of its so depending (or not depending). These potential dependencies concern: (1) the world outside the thinker himself, (2) language, (3) logical truth, and (4) consciousness. In each case the question is whether intentionality is essentially or accidentally related to the item mentioned: does it exist, that is, only by courtesy of the dependence of thought on the said item? Answering this question helps determine what the intrinsic nature of thought is.
Thoughts are obviously about things in the world, but it is a further question whether they could exist, and have the content they do, whether or not their putative objects themselves exist. Is what I think intrinsically dependent upon the world in which I happen to think it? This question was given impetus and definition by a thought experiment due to Hilary Putnam, concerning a planet called 'twin earth'. On twin earth there live thinkers who are duplicates of us in all internal respects, but whose surrounding environment contains different kinds of natural objects. The suggestion then is that what these thinkers refer to and think about is individuatively dependent upon their actual environment, so that where we think about cats when we say 'cat', they think about a different species - the one that actually sits on their mats, and so on. The key point is that, since it is impossible to individuate natural kinds like cats solely by reference to the way they strike the people who think about them, what is thought about cannot be a function simply of internal properties of the thinker. Content here is relational in nature, fixed by external facts as they bear upon the thinker. Much the same point can be made by considering repeated demonstrative reference to distinct particular objects: what I refer to when I say 'that bomb', on different occasions involving different bombs, depends upon the particular bomb in front of me and cannot be deduced from what is going on inside me. Context contributes to content.
Inspired by such examples, many philosophers have adopted an 'externalist' view of thought content: thoughts are not autonomous states of the individual, capable of transcending the contingent facts of the surrounding world. One is therefore not free to think whatever one likes, as it were, whether or not the world beyond cooperates in containing suitable referents for those thoughts. And this conclusion has generated a number of consequential questions. Can we know our thoughts with special authority, given that they are thus hostage to external circumstances? How do thoughts cause other thoughts and behaviour, given that they are not identical with internal states we are in? What kind of explanation are we giving when we cite thoughts? Can there be a science of thought if content does not generalize across environments? These questions have received many different answers, and, of course, not everyone agrees that thought has the kind of world-dependence claimed. What has not been considered carefully enough, however, is the scope of the externalist thesis - whether it applies to all forms of thought, all concepts. For unless this question is answered affirmatively, we cannot rule out the possibility that thought in general depends on there being some thought that is purely internally determined, so that the externally fixed thoughts are a secondary phenomenon. What about thoughts concerning one's present sensory experience, or logical thoughts, or ethical thoughts? Could there, indeed, be a thinker for whom internalism was generally correct? Is external individuation the rule or the exception? And might it take different forms in different cases?
Since words are also about things, it is natural to ask how their intentionality is connected to that of thoughts. Two views have been advocated: one takes thought content to be self-subsisting relative to linguistic content, with the latter dependent upon the former; the other takes thought content to be derivative upon linguistic content, so that there can be no thought without a bedrock of language. Thus arise controversies about whether animals really think, being non-speakers, or whether computers really use language, being non-thinkers. All such questions depend critically upon what one means by 'language'. Some hold that spoken language is unnecessary for thought, but that there must be an inner language if thought is to be possible; others reject the very idea of an inner language, preferring to suspend thought from outer speech. However, it is not entirely clear what it amounts to, to assert (or deny) that there is an inner language of thought. If it means merely that concepts (thought constituents) are structured in such a way as to be isomorphic with spoken language, then the claim is trivially true, given some natural assumptions. But if it means that concepts just are 'syntactic' items orchestrated into strings of the same, then the claim is acceptable only insofar as syntax is an adequate basis for meaning - which, on the face of it, it is not. Concepts no doubt have combinatorial powers comparable to those of words, but the question is whether anything else can plausibly be meant by the hypothesis of an inner language.
On the other hand, it appears undeniable that spoken language does not have autonomous intentionality, but instead derives its meaning from the thoughts of speakers - though language may augment one's conceptual capacities. So thought cannot postdate spoken language. The truth seems to be that in human psychology speech and thought are interdependent in many ways, but there is no conceptual necessity about this. The only 'language' on which thought essentially depends is thought itself: thought, indeed, depends upon there being isolable concepts that can join with others to produce complete propositional contents. But this is merely to draw attention to a property any system of concepts must have; it is not to say what concepts are or how they succeed in moving between thoughts as they do. Appeals to language at this point are apt to flounder on circularity, since words take on the powers of concepts only insofar as they express them. Thus there seems little philosophical illumination to be got from making thought depend upon language.
This third dependency question is prompted by the reflection that, while people are no doubt often irrational, woefully so, there seems to be some kind of intrinsic limit to their unreason. Even the sloppiest thinker will not infer anything from anything; to do so is a sign of madness. The question, then, is what grounds this apparent concession to logical prescription - whence the hold of logic over thought? The dependence can seem puzzling: why should the natural causal processes of thought respect the abstract relations of logic? I am free to flout the moral law to any degree I desire, but my freedom to think unreasonably appears to encounter an obstacle in the requirements of logic. My thoughts are sensitive to logical truth in somewhat the way they are sensitive to the world surrounding me: they have not the independence from what lies outside my will or self that I fondly imagined. I may try to reason contrary to modus ponens, but my efforts will be systematically frustrated. Logic takes possession of my reasoning processes and steers them according to its own dictates - defeasibly, of course, but in a systematic way that seems perplexing.
One view of this is that ascriptions of thought are not attempts to map a realm of independent causal relations, which might then conceivably come apart from logical relations, but are rather just a useful method of summing up people's behaviour. Another view insists that we must acknowledge that thought is not a natural phenomenon in the way merely physical facts are: thoughts are inherently normative in their nature, so that logical relations constitute their inner essence. Thought incorporates logic in somewhat the way externalists say it incorporates the world. Accordingly, the study of thought cannot be a natural science in the way the study of (say) chemical compounds is. Whether this view is acceptable depends upon whether we can make sense of the idea that transitions in nature, such as reasoning appears to be, can also be transitions in logical space, i.e., be constrained by the structure of that space. What must thought be, such that this combination of features is possible? Put differently, what is it for logical truth to be self-evident?
This fourth dependency question has been studied less intensively than the previous three. The question is whether intentionality is dependent upon consciousness for its very existence, and if so why. Could our thoughts have the very content they now have if we were not conscious beings at all? Unfortunately, it is difficult to see how to mount an argument in either direction. On the one hand, it can hardly be an accident that our thoughts are conscious and that their content is reflected in the intrinsic condition of our states of consciousness: it is not as if consciousness leaves off where thought content begins - as it does with, say, the neural basis of thought. Yet, on the other hand, it is by no means clear what it is about consciousness that links it to intentionality in this way. Much of the trouble here stems from our exceedingly poor understanding of the nature of consciousness: we do not understand how consciousness could arise from brain tissue (the mind-body problem), and so we fail to grasp the manner in which conscious states bear meaning. Perhaps content is fixed by extra-conscious properties and relations and only subsequently shows up in consciousness, as various naturalistic reductive accounts would suggest; or perhaps consciousness itself plays a more enabling role, allowing meaning to come into the world, hard as this may be to penetrate. In some ways the question is analogous to questions about, say, the properties of pain: is the aversive property of pain - its causing avoidance behaviour and so forth - essentially independent of the conscious state of feeling, or could pain only have its aversive function in virtue of the conscious feelings? This is part of the more general question of the epiphenomenal character of consciousness: is conscious awareness just a dispensable accompaniment of some mental feature - such as content or causal power - or is consciousness structurally involved in the very determination of the feature? It is only too easy to feel pulled in both directions on this question, neither alternative being utterly felicitous. Some theorists suspect that our uncertainty over such questions stems from a constitutional limitation on human understanding: we simply cannot develop the theoretical tools with which to answer them, and so we may not in principle be able to make any progress with the issue of whether thought depends upon consciousness and why. Certainly our present understanding falls far short of providing us with any clear route into the question.
It is extremely tempting to picture thought as some kind of inscription in a mental medium, and reasoning as a temporal sequence of such inscriptions. On this picture, all that a particular thought requires in order to exist is that the medium in question should be impressed with the right inscription. This makes thought independent of anything else. On some views the medium is conceived as consciousness itself, so that thought depends on consciousness as writing depends on paper and ink. But, especially since Wittgenstein wrote, we have seen that this conception of thought - in particular, its conception of intentionality - has to be mistaken. The definitive characteristics of thought cannot be captured within this model. It cannot make room for the idea of intrinsic world-dependence, since any inner inscription would be individuatively independent of items outside the putative medium of thought. Nor can it be made to square with the dependence of thought on logical patterns, since the medium could be configured in any way permitted by its intrinsic nature, without regard for logical truth - as sentences can be written down in any old order one likes. And it misconstrues the relation between thought and consciousness, since content cannot consist in marks on the surface of consciousness, so to speak. States of consciousness do contain particular meanings, but not as a page contains sentences: the medium conception of the relation between content and consciousness is thus deeply mistaken. The only way to make meaning enter internally into consciousness is to deny that consciousness is a mere medium in which meaning is expressed. However, it is notoriously difficult to form an adequate conception of how consciousness does carry content - one puzzle being how the external determinants of content find their way into the fabric of consciousness.
Only the alleged dependence of thought upon language fits the tempting inscriptional picture, but as we have seen, this idea tends to crumble under examination. The indicated conclusion seems to be that we simply do not possess a conception of thought that makes its real nature theoretically comprehensible - which is to say that we have no adequate conception of mind. Once we form a conception of thought that makes it seem unmysterious, as with the inscriptional picture, it turns out to have no room for content as it presents itself; while building in content as it really is leaves us with no clear picture of what could have such content. Thought is 'real', then, if and only if it is mysterious.
In the philosophy of mind, 'epiphenomenalism' is the view that while there exist mental events, states of consciousness, and experiences, they themselves have no causal powers and produce no effects on the physical world. The analogy sometimes used is that of the whistle on the engine: the whistle makes a sound (corresponding to experience), but plays no part in making the machinery move. Epiphenomenalism is a drastic solution to the difficulty of reconciling the existence of mind with the fact that, according to physics itself, only a physical event can cause another physical event. An epiphenomenalist may accept one-way causation, whereby physical events produce mental events, or may prefer some kind of parallelism, avoiding causation in either direction between mind and body. Occasionalism, by contrast, is the view that reserves causal efficacy to the action of God: events in the world merely form occasions on which God acts so as to bring about the events normally accompanying them, and thought of as their effects. The position is associated especially with the French Cartesian philosopher Nicolas Malebranche (1638-1715), who inherits the Cartesian view that pure sensation has no representative power, and adds the doctrine that knowledge of objects requires other, representative ideas that are somehow surrogates for external objects. These are archetypes, or ideas of objects as they exist in the mind of God, so that 'we see all things in God'. In the philosophy of mind, the difficulty of seeing how mind and body can interact suggests that we ought instead to think of them as two systems running in parallel. On this view, when I stub my toe, this does not cause pain, but there is a harmony between the mental and the physical (perhaps due to God) that ensures that there will be a simultaneous pain; when I form an intention and then act, the same benevolence ensures that my action is appropriate to my intention. The theory has never been widely popular, and many philosophers would say that it was the result of a misconceived 'Cartesian dualism'. A major problem for epiphenomenalism, meanwhile, is that if mental events have no causal powers, it is not clear that they can be objects of memory, or even of awareness.
'Base and superstructure' is the metaphor used by the founder of revolutionary communism, Karl Marx (1818-83), and the German social philosopher and collaborator of Marx, Friedrich Engels (1820-95), to characterize the relation between the economic organization of society, which is its base, and the political, legal, and cultural organization and social consciousness of a society, which is its superstructure. The sum total of the relations of production of material life conditions the social, political, and intellectual life process in general. The way in which the base determines the superstructure has been the subject of much debate, with writers from Engels onwards concerned to distance themselves from the crude determinism that the metaphor might suggest. It has also been urged that the relations of production are not merely economic, but involve political and ideological relations. The view that all causal power is centred in the base, with everything in the superstructure merely epiphenomenal, is sometimes called economism. The problems are strikingly similar to those that arise when the mental is regarded as supervenient upon the physical, and it is then disputed whether this takes all causal power away from mental properties.
Just the same, if, as the causal theory of action implies, intentional action requires that a desire for something and a belief about how to obtain what one desires play a causal role in producing behaviour, then, if epiphenomenalism is true, we cannot perform intentional actions. Describing events that happen does not of itself permit us to talk of rationality and intention, which are the categories we apply only if we conceive of those events as actions. We think of ourselves not only passively, as creatures within which things happen, but actively, as creatures that make things happen. Understanding this distinction gives rise to major problems concerning the nature of agency, the causation of bodily events by mental events, and the understanding of the 'will' and 'free will'. Other problems in the theory of action include drawing the distinction between the structures involved when we do one thing 'by' doing another thing. Even the placing and dating of an action can give rise to puzzles: a murderer may act on one day and in one place, and the victim then die on another day and in another place. Where and when did the murder take place? The notion of action inherits all the problems of 'intentionality'. The specific problems it raises include characterizing the difference between doing something accidentally and doing it intentionally. The suggestion that the difference lies in a preceding act of mind or volition is not very happy, since one may do automatically what is nevertheless intentional, for example putting one's foot forward while walking. Conversely, unless the formation of a volition is itself intentional, and thus raises the same questions, the presence of a volition might be unintentional or beyond one's control. Intentions are more finely grained than movements: one set of movements may be both an answering of a question and a starting of a war, yet the one may be intentional and the other not.
However, according to the traditional doctrine of epiphenomenalism, things are not as they seem: in reality, mental phenomena can have no causal effects; they are causally inert, causally impotent. Only physical phenomena are causally efficacious. Mental phenomena are caused by physical phenomena, but they cannot cause anything. In short, mental phenomena are epiphenomenal.
The epiphenomenalist claims that mental phenomena seem to be causes only because there are regularities that involve types (or kinds) of mental phenomena. For example, instances of a certain mental type 'M', e.g., trying to raise one's arm, might tend to be followed by instances of a physical type 'P', e.g., one's arm rising. To infer that instances of 'M' tend to cause instances of 'P' would be, however, to commit the fallacy of post hoc, ergo propter hoc. Instances of 'M' cannot cause instances of 'P': such causal transactions are causally impossible. M-type events tend to be followed by P-type events because instances of both are dual effects of common physical causes, not because such instances causally interact. Mental events and states can figure in the web of causal relations only as effects, never as causes.
Epiphenomenalism is a truly stunning doctrine. If it is true, then no pain could ever be a cause of our wincing, nor could something's looking red to us ever be a cause of our thinking that it is red. A nagging headache could never be a cause of a bad mood. Moreover, if the causal theory of memory is correct, then, given epiphenomenalism, we could never remember our prior thoughts, or an emotion we once felt, or a toothache we once had, or having heard someone say something, or having seen something: for such mental states and events could not be causes of memories. Furthermore, epiphenomenalism is arguably incompatible with the possibility of intentional action. For if, as the causal theory of action implies, intentional action requires that a desire for something and a belief about how to obtain what one desires play a causal role in producing behaviour, then, if epiphenomenalism is true, we cannot perform intentional actions. As it stands, the functionalist theory of belief needs to be expanded to accommodate this point - most obviously, by specifying the circumstances in which belief-desire explanations are to be deployed. However, matters are not as simple as they seem. On the functionalist theory, beliefs are causal functions from desires to actions. This creates a problem, because all of the different modes of psychological explanation appeal to states that fulfil a similar causal function from desires to actions. Of course, it is open to a defender of the functionalist approach to say that it is strictly beliefs, and not, for example, innate releasing mechanisms, that interact with desires in a way that generates actions. Nonetheless, this sort of response is of limited effectiveness unless some reason is given for distinguishing between a state of hunger and a desire for food; it is no use simply to describe desires as functions from beliefs to actions.
Of course, to say that the functionalist theory of belief needs to be expanded is not to say that it needs to be expanded along non-functionalist lines. Nothing that has been said rules out the possibility that a correct and adequate account of what distinguishes beliefs from non-intentional psychological states can be given purely in terms of respective functional roles. The core of the functionalist theory of self-reference is the thought that agents can have subjective beliefs that do not involve any internal representation of the self, linguistic or non-linguistic; it is in virtue of this that the functionalist theory claims to be able to dissolve the paradox. The problem that has emerged, however, is that it remains unclear whether those putative subjective beliefs really are beliefs. The theory's thesis is that all cases of action to be explained in terms of belief-desire psychology have to be explained through the attribution of beliefs. The thesis is clearly at work when utility conditions, and hence truth conditions, are assigned to the belief that causes the hungry creature facing food to eat what is in front of it - thus determining the content of the belief to be 'There is food in front of me', or 'I am facing food'. The problem, however, is that it is not clear that this is warranted. Either content would explain why the animal eats what is in front of it; nonetheless, the difference implicates different thoughts, only one of which can be the creature's genuine thought.
Now, the content of the belief that the functionalist theory demands we ascribe to an animal facing food is 'I am facing food now' or 'There is food in front of me now'. These are, it seems clear, structured thoughts; so too, for that matter, is the indexical thought 'There is food here now'. The crucial point, however, is that the causal function from desires to actions, which, in itself, is all that a subjective belief is, would be equally well served by the unstructured thought 'Food'.
At the heart of the reason-giving relation is a normative claim: an agent has a reason for believing, acting and so forth if, given his other psychological states, the belief or action is justified or appropriate. Displaying someone's reasons consists in making clear this justificatory link. Paradigmatically, the psychological states that provide an agent with reasons are intentional states individuated in terms of their propositional content. There is a long tradition that emphasizes that the reason-giving relation is a logical or conceptual relation. In the case of reasons for action, the premises of any reasoning are provided by intentional states other than belief.
Notice that we cannot, then, assert that epiphenomenalism is true, if it is, since an assertion is an intentional speech act. Still further, if epiphenomenalism is true, then our sense that we are agents who can act on our intentions and carry out our purposes is illusory. We are actually passive bystanders, never agents; in no relevant sense is what happens up to us. Our sense of even partial causal control is mistaken: we exert no causal control over so much as the direction of our attention. Finally, suppose that reasoning is a causal process. Then, if epiphenomenalism is true, we never reason, for there are no mental causal processes. While one thought may follow another, one thought never leads to another. Indeed, while thoughts may occur, we do not engage in the activity of thinking. How, then, could we even make inferences that commit the fallacy of post hoc, ergo propter hoc - or make any inferences at all, for that matter?
As neurophysiological research began to develop in earnest during the latter half of the nineteenth century, it seemed to find no mental influence on what happens in the brain. While it was not shown that neurophysiological events by themselves causally determine all other neurophysiological events, there seemed to be no 'gaps' in neurophysiological causal mechanisms that could be filled by mental occurrences. Neurophysiology appeared to have no need of the hypothesis that there are mental events. (Here and hereafter, unless indicated otherwise, 'events' in the broadest sense will include states as well as changes.) This 'no gap' line of argument led some theorists to deny that mental events have any causal effects. They reasoned as follows: if mental events have any effects, among their effects would be neurophysiological ones; mental events have no neurophysiological effects; thus, mental events have no effects at all. The relationship between mental phenomena and neurophysiological mechanisms was likened to that between the steam-whistle which accompanies the working of a locomotive engine and the mechanism of the engine: just as the steam-whistle is an effect of the operations of the mechanism but has no causal influence on those operations, so too mental phenomena are effects of the workings of neurophysiological mechanisms, but have no causal influence on their operations. (The analogy quickly breaks down, however, as steam-whistles have causal effects, while the epiphenomenalist alleges that mental phenomena have no causal effects at all.)
An early response to this 'no gap' line of argument was that mental events (and states) are not changes in (and states of) an immaterial Cartesian substance; they are, rather, changes in (and states of) the brain. While mental properties or kinds are not neurophysiological properties or kinds, nevertheless particular mental events are neurophysiological events. According to the view in question, a given event can be an instance of both a neurophysiological type and a mental type, and thus be both a mental event and a neurophysiological event. (Compare the fact that an object might be an instance of more than one kind of object: for example, an object might be both a stone and a paper-weight.) It was held, moreover, that mental events have causal effects because they are neurophysiological events with causal effects. This response presupposes that causation is an 'extensional' relation between particular events: if two events are causally related, they are so related however they are typed (or described). That assumption is today widely held. And given that the causal relation is extensional, if particular mental events are indeed neurophysiological events with causal effects, then mental events are causes, and epiphenomenalism is thus false.
This response to the 'no gap' argument, however, prompts a concern about the relevance of mental properties or kinds to causal relations. Writing in 1925, C. D. Broad tells us that the view that mental events are epiphenomenal is the view that mental events either (a) do not function at all as causal factors, or (b) that if they do, they do so in virtue of their physiological characteristics and not in virtue of their mental characteristics.
Classicists are motivated (in part) by properties thought seems to share with language. Jerry Fodor's Language of Thought Hypothesis (Fodor 1975, 1987), according to which the system of mental symbols constituting the neural basis of thought is structured like a language, provides a well-worked-out version of the classical approach as applied to commonsense psychology. According to the language of thought hypothesis, the potential infinity of complex representational mental states is generated from a finite stock of primitive representational states, in accordance with recursive formation rules. This combinatorial structure accounts for the productivity and systematicity of the system of mental representations. As in the case of symbolic languages, including natural languages (though Fodor does not suppose either that the language of thought hypothesis explains only linguistic capacities or that only verbal creatures have this sort of cognitive architecture), these properties of thought are explained by appeal to the content of the representational units and their combinability into contentful complexes. That is, the semantics of both language and thought is compositional: the content of a complex representation is determined by the contents of its constituents and their structural configuration.
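Since compositionality can be hard to visualize in the abstract, here is a minimal sketch - an illustration only, not Fodor's own formalism - of how a finite stock of primitive representations plus recursive formation rules generates a potential infinity of complex representations whose content is determined compositionally. The type names (Prim, Neg, Conj) and the truth-valued notion of 'content' are simplifying assumptions made for the sketch.

```python
# A toy 'language of thought': primitives plus recursive formation rules.
from dataclasses import dataclass

@dataclass(frozen=True)
class Prim:
    """A primitive mental symbol."""
    name: str

@dataclass(frozen=True)
class Neg:
    """Formation rule: the negation of a representation."""
    part: "Rep"

@dataclass(frozen=True)
class Conj:
    """Formation rule: the conjunction of two representations."""
    left: "Rep"
    right: "Rep"

Rep = Prim | Neg | Conj  # the space of representations, defined recursively

def content(rep: Rep, world: dict[str, bool]) -> bool:
    """Compositional semantics: the content of a complex representation is
    determined by the contents of its constituents and their structure."""
    match rep:
        case Prim(name):
            return world[name]
        case Neg(part):
            return not content(part, world)
        case Conj(left, right):
            return content(left, world) and content(right, world)

world = {"it_rains": True, "it_pours": False}
thought = Conj(Prim("it_rains"), Neg(Prim("it_pours")))
print(content(thought, world))  # True
```

Productivity falls out of the recursion (complexes of any depth can be formed from the finite primitive stock), and systematicity falls out of the structure: any system that can token Conj(a, b) can equally token Conj(b, a).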
Connectionists are motivated mainly by a consideration of the architecture of the brain, which apparently consists of layered networks of interconnected neurons. They argue that this sort of architecture is unsuited to carrying out classical serial computations. For one thing, processing in the brain is typically massively parallel. In addition, the elements whose manipulation drives computation in connectionist networks (principally, the connections between nodes) are neither semantically compositional nor semantically evaluable, as they are on the classical approach. This contrast with classical computationalism is often characterized by saying that representation is, with respect to computation, distributed as opposed to local: representation is local if it is computationally basic, and distributed if it is not. (Another way of putting this is to say that for classicists mental representations are computationally atomic, whereas for connectionists they are not.)
Moreover, connectionists argue that information processing as it occurs in connectionist networks more closely resembles some features of actual human cognitive functioning. For example, whereas on the classical view learning involves something like hypothesis formation and testing (Fodor 1981), on the connectionist model it is a matter of evolving a distribution of 'weights' (strengths) on the connections between nodes, and typically does not involve the formulation of hypotheses regarding the identity conditions for the objects of knowledge. The connectionist network is 'trained up' by repeated exposure to the objects it is to learn to distinguish; and, though networks typically require many more exposures to the objects than do humans, this seems to model at least one feature of this type of human learning quite well.
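The learning picture just described can be made concrete with a minimal sketch, under stated assumptions: a single-layer perceptron (far simpler than the distributed networks connectionists actually study) is 'trained up' by repeated exposure to labelled examples, with knowledge accruing only as an evolving distribution of connection weights - no hypotheses are formulated or stored.

```python
# Toy connectionist learning: weights evolve under repeated exposure.
import random

def train(examples, n_inputs, epochs=50, rate=0.1):
    """Nudge each connection weight in proportion to the error on each
    exposure; the 'knowledge' lives entirely in the weights."""
    weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Repeated exposure to a toy discrimination task (logical AND):
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(examples, n_inputs=2)
for inputs, _ in examples:
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    print(inputs, "->", 1 if activation > 0 else 0)
```

Note that nothing in the trained system is a stored rule of the form 'output 1 when both inputs are 1'; the disposition is spread across the weights, which is precisely the contrast with classical hypothesis formation and testing.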
Further, degradation in the performance of such networks in response to damage is gradual, not sudden as in the case of a classical information processor, and hence more accurately models the loss of human cognitive function as it typically occurs in response to brain damage. It is also sometimes claimed that connectionist systems show the kind of flexibility in response to novel situations typical of human cognition - situations in which classical systems are relatively 'brittle' or 'fragile'.
Some philosophers have maintained that connectionism entails that there are no propositional attitudes. Ramsey, Stich and Garon (1990) have argued that if connectionist models of cognition are basically correct, then there are no discrete representational states as conceived in ordinary commonsense psychology and classical cognitive science. Others, however (e.g., Smolensky 1989), hold that certain types of higher-level patterns of activity in a neural network may be roughly identified with the representational states of commonsense psychology. Still others argue that language-of-thought style representation is both necessary in general and realizable within connectionist architectures; the central contemporary papers in the classicist/connectionist debate have been collected, with useful introductory material, in anthologies devoted to the controversy.
Stich (1983) accepts that mental processes are computational, but denies that computations are sequences of mental representations; others accept the notion of mental representation, but deny that the computational theory of mind provides the correct account of mental states and processes.
Van Gelder (1995) denies that psychological processes are computational. He argues that cognitive systems are dynamic, and that cognitive states are not relations to mental symbols but quantifiable states of a complex system consisting of (in the case of human beings) a nervous system, a body, and the environment in which they are embedded. Cognitive processes are not rule-governed sequences of discrete symbolic states, but continuous, evolving total states of dynamic systems determined by continuous, simultaneous and mutually determining states of the system's components. Representation in a dynamic system is essentially information-theoretic, though the bearers of information are not symbols but state variables or parameters.
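To make the contrast concrete, here is a minimal sketch - purely illustrative, not Van Gelder's own model - of a state conceived dynamically: two quantifiable variables evolve continuously under mutually determining influences, rather than stepping through discrete symbolic configurations. The coefficients are arbitrary assumptions chosen to yield a stable system.

```python
# Toy dynamical system: a cognitive 'state' as an evolving pair of
# mutually determining quantities, not a structure of symbols.
def step(x, y, dt=0.01):
    """One Euler step: each variable's rate of change depends on the
    simultaneous value of the other."""
    dx = -0.5 * x + 1.2 * y   # x decays but is driven by y
    dy = -0.8 * y + 0.3 * x   # y decays but is driven by x
    return x + dt * dx, y + dt * dy

x, y = 1.0, 0.0
for _ in range(1000):
    x, y = step(x, y)
print(round(x, 4), round(y, 4))  # the trajectory settles toward an attractor
```

On the dynamicist reading, what carries information here is the trajectory of the state variables themselves, not any symbol that the system manipulates.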
Horst (1996), on the other hand, argues that though computational models may be useful in scientific psychology, they are of no help in achieving a philosophical understanding of the intentionality of commonsense mental states. The computational theory of mind attempts to reduce the intentionality of such states to the intentionality of the mental symbols they are relations to. But, Horst claims, the relevant notion of symbolic content is essentially bound up with the notions of convention and intention, so the computational theory of mind involves itself in a vicious circularity: the very properties that are supposed to be reduced are (tacitly) appealed to in the reduction.
To say that a mental object has semantic properties is, paradigmatically, to say that it may be about, or be true or false of, an object or objects, or that it may be true or false simpliciter. Suppose I think that you have taken to sniffing snuff. I am thinking about you, and if what I think of you (that you take snuff) is true of you, then my thought is true. According to the representational theory of mind, such states are to be explained as relations between agents and mental representations. To think that you take snuff is to token in some way a mental representation whose content is that you take snuff. On this view, the semantic properties of mental states are the semantic properties of the representations they are relations to.
Linguistic acts seem to share such properties with mental states. Suppose I say that you take snuff. I am talking about you, and if what I say of you (that you take snuff) is true of you, then my utterance is true. Now, to say that you take snuff is (in part) to utter a sentence that means that you take snuff. Many philosophers have thought that the semantic properties of linguistic expressions are inherited from the intentional mental states they are conventionally used to express. On this view, the semantic properties of linguistic expressions are the semantic properties of the representations that are the mental relata of the states they are conventionally used to express.
It is also widely held that in addition to having such properties as reference, truth-conditions and truth - so-called extensional properties - expressions of natural languages also have intensional properties, in virtue of expressing properties or propositions - i.e., in virtue of having meanings or senses, where two expressions may have the same reference, truth-conditions or truth value, yet express different properties or propositions (Frege 1892/1997). If the semantic properties of natural-language expressions are inherited from the thoughts and concepts they express (or vice versa, or both), then an analogous distinction may be appropriate for mental representations.
Theories of representational content may be classified according to whether they are atomistic or holistic, and according to whether they are externalist or internalist. Holism emphasizes the priority of a whole over its parts. In the philosophy of language, this becomes the claim that the meaning of an individual word or sentence can only be understood in terms of its relation to an indefinitely larger body of language, such as a whole theory, or even a whole language or form of life. In the philosophy of mind, a mental state similarly may be identified only in terms of its relations with others. Moderate holism may allow that other things besides these relationships also count; extreme holism would hold that the network of relationships is all that we have. A holistic view of science holds that experience only confirms or disconfirms large bodies of doctrine, impinging at the edges, and leaving some leeway over the adjustments that it requires.
Once again, in the philosophy of mind and language, externalism is the view that what is thought, or said, or experienced, is essentially dependent on aspects of the world external to the mind of the subject. The view goes beyond holding that such mental states are typically caused by external factors, to insist that they could not have existed as they now do without the subject being embedded in an external world of a certain kind. It is these external relations that make up the essence or identity of the mental state. Externalism is thus opposed to the Cartesian separation of the mental from the physical, since that separation holds that the mental could in principle exist as it does even if there were no external world at all. Various external factors have been advanced as ones on which mental content depends, including the usage of experts, the linguistic norms of the community, and the general causal relationships of the subject. In the theory of knowledge, externalism is the view that a person might know something by being suitably situated with respect to it, without that relationship being in any sense within his purview. The person might, for example, be very reliable in some respect without believing that he is. The view allows that one can know without being justified in believing that one knows.
However, atomistic theories take a representation's content to be something that can be specified independently of that representation's relations to other representations. What the American philosopher of mind Jerry Alan Fodor (1935-) calls the crude causal theory, for example, takes a representation to be a COW - a mental representation with the same content as the word 'cow' - if its tokens are caused by instantiations of the property of being-a-cow; and this is a condition that places no explicit constraints on how COWs must or might relate to other representations. Holistic theories contrast with atomistic theories in taking the relations a representation bears to others to be essential to its content. According to functional role theories, a representation is a COW if it behaves as a COW should behave in inference.
Internalist theories take the content of a representation to be a matter determined by factors internal to the system that uses it. Thus, what Block (1986) calls 'short-armed' functional role theories are internalist. Externalist theories take the content of a representation to be determined, in part at least, by factors external to the system that uses it. Covariance theories, as well as teleological theories that invoke an historical theory of functions, take content to be determined by 'external' factors. Crossing the atomistic-holistic distinction with the internalist-externalist distinction thus yields four possible kinds of theory of content.
Externalist theories (sometimes called non-individualistic theories) have the consequence that molecule-for-molecule identical cognitive systems might yet harbour representations with different contents. This has given rise to a controversy concerning 'narrow' content. If we assume some form of externalist theory is correct, then content is, in the first instance, 'wide' content, i.e., determined in part by factors external to the representing system. On the other hand, it seems clear that, on plausible assumptions about how to individuate psychological capacities, internally equivalent systems must have the same psychological capacities. Hence, it would appear that wide content cannot be relevant to characterizing psychological equivalence. Since cognitive science generally assumes that content is relevant to characterizing psychological equivalence, philosophers attracted to externalist theories of content have sometimes attempted to introduce 'narrow' content, i.e., an aspect or kind of content that is shared by internally equivalent systems. The simplest such theory is Fodor's idea (1987) that narrow content is a function from contexts (i.e., from whatever the external factors are) to wide contents.
All the same, what a person expresses by a sentence is often a function of the environment in which he or she is placed. For example, the disease I refer to by a term like 'arthritis', or the kind of tree I refer to as a 'maple', will be defined by criteria of which I know next to nothing. This raises the possibility of imagining two persons in rather different environments, but to whom everything appears the same. The wide content of their thoughts and sayings will be different if the situations surrounding them are appropriately different: 'situation' may include the actual objects they perceive, or the chemical or physical kinds of object in the world they inhabit, or the history of their words, or the decisions of authorities on what counts as an example of one of the terms they use. The narrow content is that part of their thought which remains identical, through the identity of the way things appear to them, regardless of these differences of surroundings. Partisans of wide content may doubt whether there is any content in this narrow sense; partisans of narrow content believe that it is the fundamental notion, with wide content being explicable in terms of narrow content plus context.
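Fodor's proposal, as glossed above, lends itself to a small sketch: narrow content modelled as a function from contexts to wide contents, so that molecule-for-molecule twins share the function even though their environments assign them different wide contents. The context labels and contents below are illustrative stand-ins drawn from the twin-earth literature, not Fodor's own specification.

```python
# A sketch of narrow content as a function from contexts to wide contents.
# 'earth'/'twin_earth' and the contents are illustrative assumptions.
def narrow_water_concept(context: str) -> str:
    """The same internal state, embedded in different environments,
    determines different wide contents."""
    wide_contents = {
        "earth": "H2O",       # on Earth, 'water' thoughts concern H2O
        "twin_earth": "XYZ",  # on twin earth, the look-alike stuff XYZ
    }
    return wide_contents[context]

# Internal twins share the narrow content (the function itself); their
# wide contents differ because their contexts differ.
print(narrow_water_concept("earth"))       # H2O
print(narrow_water_concept("twin_earth"))  # XYZ
```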
Even so, the distinction between facts and values has outgrown its name: it applies not only to matters of fact vs. matters of value, but also to statements that something is vs. statements that something ought to be. Roughly, factual statements - 'is statements' in the relevant sense - represent some state of affairs as obtaining, whereas normative statements - evaluative and deontic ones - attribute goodness to something, or ascribe to an agent an obligation to act. Neither distinction is merely linguistic. Specifying a book's monetary value is making a factual statement, though it attributes a kind of value. 'That is a good book' expresses a value judgement, though the term 'value' is absent (nor would 'valuable' be synonymous with 'good'). Similarly, 'we are morally obligated to fight' superficially expresses an is-statement, and 'by all indications it ought to rain' superficially makes a kind of ought-claim; but the former is an ought-statement, the latter an (epistemic) is-statement.
Theoretical difficulties also beset the distinction. Some have absorbed values into facts, holding that all value is instrumental: roughly, to have value is to contribute - in a factually analysable way - to something further which is (say) deemed desirable. Others have suffused facts with values, arguing that facts (and observations) are 'theory-impregnated' and contending that value judgements are inescapable in theoretical choice. But while some philosophers doubt that fact/value distinctions can be sustained, there persists a sense of a deep difference between evaluating or attributing an obligation, on the one hand, and saying how the world is, on the other.
Fact/value distinctions may be defended by appeal to the notion of intrinsic value, the value a thing has in itself and thus independently of its consequences. Roughly, a value statement (proper) is an ascription of intrinsic value, one to the effect that a thing is to some degree good in itself. This leaves open whether ought-statements are implicitly value statements, but even if they imply that something has intrinsic value - e.g., moral value - they can be independently characterized, say by appeal to rules that provide (justifying) reasons for action. One might also ground the fact/value distinction in the attitudinal (or even motivational) component apparently implied by the making of evaluational or deontic judgements: thus, 'it is a good book, but that is no reason for a positive attitude towards it' and 'you ought to do it, but there is no reason to' seem inadmissible, whereas substituting 'an expensive book' and 'you will do it' yields permissible judgements. One might also argue that factual judgements are the kind which are in principle appraisable scientifically, and thereby anchor the distinction on the factual side. This line is plausible, but there is controversy over whether scientific procedures are 'value-free' in the required way.
Philosophers differ regarding the sense, if any, in which epistemology is normative (roughly, valuational). But what precisely is at stake in this controversy is no clearer than the problematic fact/value distinction itself. Must epistemologists as such make judgements of value or epistemic responsibility? If epistemology is naturalizable, then epistemic principles simply articulate under what conditions - say, appropriate perceptual stimulations - a belief is justified, or constitutes knowledge. Its standards of justification would then be like standards of, e.g., resilience for bridges. It is not obvious, however, that the appropriate standards can be established without independent judgements that, say, a certain kind of evidence is good enough for justified belief (or knowledge). The most plausible view may be that justification is like intrinsic goodness: though it supervenes on natural properties, it cannot be analysed wholly in factual terms.
Thus far, belief has been depicted as all-or-nothing. An alternative is the notion of acceptance: what is accepted is something we have grounds for thinking true; its acceptance is governed by epistemic norms; it is partially subject to voluntary control; and it has functional affinities to belief. Still, the notion of acceptance, like that of degrees of belief, merely extends the standard picture, and does not replace it.
Traditionally, belief has been of epistemological interest in its propositional guise: 'S' believes that 'p', where 'p' is a proposition towards which an agent, 'S', exhibits an attitude of acceptance. Not all belief is of this sort. If I trust what you say, I believe you. And someone may believe in Mr. Radek, or in a free-market economy, or in God. It is sometimes supposed that all belief is 'reducible' to propositional belief, belief-that. Thus, my believing you might be thought a matter of my believing, perhaps, that what you say is true, and your belief in free markets or God a matter of your believing that free-market economies are desirable or that God exists.
Some philosophers have followed St. Thomas Aquinas (1225-74) in supposing that to believe in God is simply to believe that certain truths hold, while others argue that belief-in is a distinctive attitude, one that includes essentially an element of trust. More commonly, belief-in has been taken to involve a combination of propositional belief together with some further attitude.
The moral philosopher Richard Price (1723-91) defended the claim that there are different sorts of belief-in, some, but not all, reducible to beliefs-that. If you believe in God, you believe that God exists, that God is good, etc. But according to Price, your belief involves, in addition, a certain complex pro-attitude toward its object. Even so, belief-in often outruns the evidence for the corresponding belief-that. Does this diminish its rationality? If belief-in presupposes belief-that, it might be thought that the evidential standards for the former must be at least as high as the standards for the latter. And any additional pro-attitude might be thought to require a further layer of justification not required for cases of belief-that.
Belief-in may be, in general, less susceptible to alteration in the face of unfavourable evidence than belief-that. A believer who encounters evidence against God's existence may remain unshaken in his belief, in part because the evidence does not bear on his pro-attitude. So long as that pro-attitude remains united with his belief that God exists, his belief-in may survive the unfavourable evidence - and reasonably so - in a way that an ordinary propositional belief would not.
A correlative way of elaborating the general objection to justificatory externalism challenges the sufficiency of the various externalist conditions by citing cases where those conditions are satisfied, but where the believers in question seem intuitively not to be justified. In this context, the most widely discussed examples have to do with possible occult cognitive capacities, like clairvoyance. Applying the point, once again, to reliabilism: the claim is that a believer who has no reason to think that he has such a cognitive power, and perhaps even good reasons to the contrary, is not rational or responsible, and therefore not epistemically justified, in accepting the beliefs that result from his clairvoyance, despite the fact that the reliabilist condition is satisfied.
One sort of response to this latter sort of objection is to 'bite the bullet' and insist that such believers are in fact justified, dismissing the seeming intuitions to the contrary as latent internalist prejudice. A more widely adopted response attempts to impose additional conditions, usually of a roughly internalist sort, which will rule out the offending examples while stopping far short of a full internalism. But while there is little doubt that such modified versions of externalism can handle particular cases well enough to avoid clear intuitive implausibility, it remains unclear whether there are other problematic cases that they cannot handle, and also whether there is any clear motivation for the additional requirements other than the general internalist view of justification that the externalist is committed to rejecting.
A view in this same general vein, one that might be described as a hybrid of internalism and externalism, holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., a pure reliabilism. At the same time, however, it must be objectively true that beliefs for which such a factor is available are likely to be true; this further fact need not be in any way grasped or cognitively accessible to the believer. In effect, of the two premises needed to argue that a particular belief is likely to be true, one must be accessible in a way that would satisfy at least weak internalism, while the other need not be. The internalist will respond that this hybrid view is of no help at all in meeting the objection, for the believer in question, lacking one crucial premise, still has no reason at all for thinking that his belief is likely to be true; nor is the belief held in the rational, responsible way that justification intuitively seems to require.
An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., which results from a reliable process (and perhaps satisfies further conditions as well). This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept to epistemology would obviously be seriously diminished.
Such an externalist account of knowledge can accommodate the commonsense conviction that animals, young children and unsophisticated adults possess knowledge, though not the weaker conviction (if such a conviction exists) that such individuals are epistemically justified in their beliefs. It is, at least, less vulnerable to internalist counter-examples of the sort discussed, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have. In particular, does it have any serious bearing on traditional epistemological problems and on the deepest and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification rather than knowledge?
A rather different use of the terms 'internalism' and 'externalism' has to do with the issue of how the content of beliefs and thoughts is determined. According to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual's mind or brain, and not at all on his physical and social environment; according to an externalist view, content is significantly affected by such external factors, and views that appeal to both internal and external elements are standardly classified as externalist.
As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc. that motivate the views that have come to be known as 'direct reference' theories. Such phenomena seem at least to show that the belief or thought content that can properly be attributed to a person is dependent on facts about his environment - e.g., whether he is on Earth or Twin Earth, what he is in fact pointing at, the classificatory criteria employed by experts in his social group, etc. - not just on what is going on internally in his mind or brain.
An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the content of our beliefs or thoughts 'from the inside', simply by reflection. If content depends on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those factors - which will not in general be available to the person whose belief or thought is in question.
The adoption of an externalist account of mental content would seem to support an externalist account of justification in the following way: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content, and the status of that content as justifying further beliefs, will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts, that only internally accessible content can be justified or justify anything else; but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.
Except for alleged cases of self-evident truths, it is often thought that anything that is known must satisfy certain criteria as well as being true. These criteria are general principles that will make a proposition evident, or will just make accepting it warranted to some degree. Traditional suggestions for this role include: (1) if a proposition 'p', e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then 'p' is evident; or, more simply, (2) if we cannot conceive 'p' to be false, then 'p' is evident; or (3) whatever one is immediately conscious of in thought or experience, e.g., that we seem to see red, is evident; or again, if 'p' coheres with the bulk of one's beliefs, then 'p' is warranted. These might be criteria whereby putative self-evident truths, e.g., that one clearly and distinctly conceives 'p', 'transmit' the status as evident they already have without criteria to other propositions like 'p'. Alternatively, they might be criteria whereby epistemic status, e.g., p's being evident, is originally created by purely non-epistemic considerations, e.g., facts about logical connections or about how 'p' is conceived, which need not themselves be already evident or warranted. If that status can in turn be 'transmitted' to other propositions, e.g., by deduction or induction, there will be criteria specifying when it is.
The sceptic, in effect, holds that traditional criteria do not seem to make evident propositions about anything beyond our own thoughts, experiences and necessary truths, to which deductive or inductive criteria may then be applied. Moreover, arguably, inductive criteria, including criteria warranting the best explanation of data, never make things evident or warrant their acceptance enough to count as knowledge.
Contemporary epistemologists suggest that traditional criteria may need alteration in three ways. Additional evidence may subject even our most basic judgements to rational correction, though they count as evident on the basis of our criteria. Warrant may be transmitted other than through deductive and inductive relations between propositions. Transmission criteria might not simply ‘pass’ evidence on linearly from a foundation of highly evident ‘premisses’ to ‘conclusions’ that are never more evident.
An argument is a group of statements, some of which purportedly provide support for another. The statements which purportedly provide the support are the premisses, while the statement purportedly supported is the conclusion. Arguments are typically divided into two categories depending on the degree of support they purportedly provide. Deductive arguments purportedly provide conclusive support for their conclusions, while inductive arguments purportedly provide only probable support. Some, but not all, arguments succeed in providing support for their conclusions. Successful deductive arguments are valid, while successful inductive arguments are strong. An argument is valid just in case it is impossible for all its premisses to be true and its conclusion false. Deductive logic provides methods for ascertaining whether or not an argument is valid, whereas inductive logic provides methods for ascertaining the degree of support the premisses of an argument confer on its conclusion.
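For illustration: the argument from the premisses 'If it rains, the streets are wet' and 'It rains' to the conclusion 'The streets are wet' is deductively valid - in the notation used later in this post, from p and p ➞ q one may infer q - since no interpretation makes both premisses true and the conclusion false. By contrast, the argument from 'Every raven observed so far has been black' to 'All ravens are black' is at best inductively strong: its premiss could be true while its conclusion is false, so the support is probable rather than conclusive.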
The word modern in philosophy originally meant “new,” distinguishing a new historic era both from antiquity and from the intervening Middle Ages. Many things had occurred in the intellectual, religious, political, and social life of Europe to justify the belief of 16th- and 17th-century thinkers in the genuinely new character of their times. The explorations of the world; the Protestant Reformation, with its emphasis on individual faith; the rise of commercial urban society; and the dramatic appearance during the Renaissance of new ideas in all areas of culture stimulated the development of a new philosophical world-view.
The medieval view of the world as a hierarchical order of beings created and governed by God was supplanted by the mechanistic picture of the world as a vast machine, the parts of which move in accordance with strict physical laws, without purpose or will. In this view of the universe, known as Mechanism, science took precedence over spirituality, and the surrounding physical world that we experience and observe received as much, if not more, attention than the world to come. The aim of human life was no longer conceived as preparation for salvation in the next world, but rather as the satisfaction of people’s natural desires. Political institutions and ethical principles ceased to be regarded as reflections of divine command and came to be seen as practical devices created by humans.
The human mind itself seemed an inexhaustible reality, on a par with the physical reality of matter. Modern philosophers had the task of defining more clearly the essence of mind and of matter, and of reasoning about the relation between the two. Individuals ought to see for themselves, they believed, and study the “book of Nature,” and in every case search for the truth with their own reason.
Since the 15th century modern philosophy has been marked by a continuing interaction between systems of thought based on a mechanistic, materialistic interpretation of the universe and those founded on a belief in human thought as the only ultimate reality. This interaction has reflected the increasing effect of scientific discovery and political change on philosophical speculation.
Galileo was forced to stand trial for his belief in Copernicanism, the idea that Earth moves around the Sun; the Roman Catholic Church compelled him to publicly recant Copernicanism, and he spent the rest of his life under house arrest.
In the new philosophical climate, experience and reason became the sole standards of truth. The first great spokesperson for the new philosophy was the English philosopher and statesman Francis Bacon, who denounced reliance on authority and verbal argument and criticized Aristotelian logic as useless for the discovery of new laws. Bacon called for a new scientific method based on reasoned generalization from careful observation and experiment. He was the first to formulate rules for this new method of drawing conclusions, now known as inductive inference.
English philosopher and statesman Sir Francis Bacon’s philosophical treatise, Novum Organum (1620), is regarded as an important contribution to scientific methodology. In this work Bacon advanced the necessity of experimentation and accurate observation. Writing in aphorisms (concise statements of principle), Bacon outlined four types of false notions or methods that impede the ability to study nature impartially. He labelled these notions the Idols of the Tribe, the Idols of the Cave, the Idols of the Market-place, and the Idols of the Theatre. Novum Organum greatly influenced the later empiricists, including English philosopher John Locke.
The work of Italian physicist and astronomer Galileo was of even greater importance in the development of a new world-view. Galileo brought attention to the importance of applying mathematics to the formulation of scientific laws. This he accomplished by creating the science of mechanics, which applied the principles of geometry to the motions of bodies. The success of mechanics in discovering reliable and useful laws of nature suggested to Galileo and to later scientists that all nature is designed in accordance with mechanical laws.
The struggle between the Roman Catholic Church and 17th-century Italian physicist and astronomer Galileo has become symbolic of the clash between authority and intellectual freedom, but Galileo himself did not foresee any conflict. Using one of the first telescopes, Galileo found evidence to support the Copernican theory that the Earth and the other planets revolve around the Sun. Galileo believed that his scientific findings fell far outside the theological realm, and his shock and disbelief as the disagreement with the church escalated have been explored by the scholar Stillman Drake.
These great changes of the 15th and 16th centuries brought about two intellectual crises that profoundly affected Western civilization. First, the decline of Aristotelian science called into question the methods and foundations of the sciences. This decline came about for a number of reasons including the inability of Aristotelian principles to explain new observations in astronomy. Second, new attitudes toward religion undermined religious authority and gave agnostic and atheistic ideas a chance to be heard.
French philosopher and mathematician René Descartes (1596-1650) is sometimes called the father of modern philosophy. In 1649 Descartes was invited by Queen Christina of Sweden to Stockholm to instruct the queen in philosophy. Although treated well by the queen, he was unaccustomed to the cold of Swedish winters and died of pneumonia the following year.
During the 17th century French mathematician, physicist, and philosopher René Descartes attempted to resolve both crises. He followed Bacon and Galileo in criticizing existing methods and beliefs, but whereas Bacon had argued for an inductive method based on observed facts, Descartes made mathematics the model for all science. Descartes championed the truth contained in the “clear and distinct ideas” of reason itself. The advance toward knowledge was from one such truth to another, as in mathematical reasoning. Descartes believed that by following his rationalist method, one could establish first principles (fundamental underlying truths) for all knowledge—about man, the world, and even God.
The 17th-century French scientist and mathematician René Descartes was also one of the most influential thinkers in Western philosophy. Descartes stressed the importance of skepticism in thought and proposed the idea that existence has a dual nature: one physical, the other mental. The latter concept, known as Cartesian dualism, continues to engage philosophers today. His Discourse on Method (first published in his Philosophical Essays in 1637) contains a summary of this thesis, including the celebrated phrase "I think, therefore I am."
Descartes resolved to reconstruct all human knowledge on an absolutely certain foundation by refusing to accept any belief, even the belief in his own existence, until he could prove it to be necessarily true. In his so-called dream argument, he argued that our inability to prove with certainty when we are awake and when we are dreaming makes most of our knowledge uncertain. Ultimately he concluded that the first thing of whose existence one can be certain is oneself as a thinking being. This conclusion forms the basis of his well-known argument, "Cogito, ergo sum" ("I think, therefore I am"). He also argued that, in pure thought, one has a clear conception of God and can demonstrate that God exists. Descartes argued that secure knowledge of the reality of God allowed him to resolve his earlier doubts about knowledge and science.
French thinker René Descartes applied rigorous scientific methods of deduction to his exploration of philosophical questions. Descartes is probably best known for his pioneering work in philosophical skepticism, above all in the Meditationes de Prima Philosophia (1641; Meditations on First Philosophy), a work whose unconventional use of logic and the reactions it aroused have been examined by the scholar Tom Sorell.
Despite his mechanistic outlook, Descartes accepted the traditional religious doctrine of the immortality of the soul and maintained that mind and body are two distinct substances, thus exempting mind from the mechanistic laws of nature and providing for freedom of the will. His fundamental separation of mind and body, known as Cartesian dualism, raised the problem of explaining how two such different substances can affect each other, a problem he was unable to solve and which has remained a concern of philosophy ever since. Descartes's thought launched an era of speculation in metaphysics as philosophers made a determined effort to overcome dualism - the belief in the irreconcilable difference between mind and matter - and obtain unity.
The 17th–century English philosopher Thomas Hobbes, in his effort to attain unity, asserted that matter is the only real substance. He constructed a comprehensive system of metaphysics that provided a solution to the mind-body problem by reducing mind to the internal motions of the body. He also argued that there is no contradiction between human freedom and causal determinism—the view that every act is determined by a prior cause. Both, according to Hobbes, work in accordance with the mechanical laws that govern the universe.
Seventeenth-century English philosopher Thomas Hobbes's view of human nature is often characterized as deeply pessimistic. In the famous phrase from Leviathan (1651), Hobbes's best-known work, the life of man is "solitary, poor, nasty, brutish, and short." As the scholar Richard Tuck has made clear, however, Hobbes developed his theory of human morality and social relations from the humanist tradition prevalent among intellectuals of his time.
In his ethical theory Hobbes derived the rules of human behaviour from the law of self-preservation and justified egoistic action as the natural human tendency. In his political theory he maintained that government and social justice are artificial creations based on social contract (voluntary agreement between people and their government) and maintained by force. In his most famous work, Leviathan (1651), Hobbes justified political authority on the basis that self-interested people who existed in a terrifying “state of nature” - that is, without a ruler - would seek to protect themselves by forming a political commonwealth that had rules and regulations. He concluded that absolute monarchy is the most effective means of preserving peace.
A member of the rationalist school of philosophy, Baruch Spinoza pursued knowledge through deductive reasoning rather than induction from sensory experience. Spinoza applied the theoretical method of mathematics to other realms of inquiry. Following the format of Euclid’s Elements, Spinoza’s Ethics organized morality and religion into definitions, axioms, and postulates.
Whereas Hobbes tried to oppose Cartesian dualism by reducing mind to matter, the 17th-century Dutch philosopher Baruch Spinoza attempted to reduce matter to divine spiritual substance. He constructed a remarkably precise and rigorous system of philosophy that offered new solutions to the mind-body problem and to the conflict between religion and science. Like Descartes, Spinoza maintained that the entire structure of nature can be deduced from a few basic definitions and axioms, on the model of Euclidean geometry. However, Spinoza believed that Descartes’s theory of two substances created an insoluble problem of the way in which mind and bodies interact. He concluded that the ultimate substance is God and that God, substance, and nature are identical. Thus he supported the pantheistic view that all things are aspects or modes of God.
Dutch philosopher Baruch Spinoza is regarded as the foremost Western proponent of pantheism, the belief that God and nature are one and the same. This idea is the central thesis of Spinoza's most famous and influential work, the Ethica Ordine Geometrico Demonstrata (Ethics Demonstrated in Geometrical Order), published posthumously in 1677, with its assertion, examined at length by the scholar Roger Scruton, that God is the "substance" of everything.
Spinoza’s solution to the mind-body problem explained the apparent interaction of mind and body by regarding them as two forms of the same substance, which exactly parallel each other, thus seeming to affect each other but not really doing so. Spinoza’s ethics, like the ethics of Hobbes, was based on materialistic psychology according to which individuals are motivated only by self-interest. But in contrast with Hobbes, Spinoza concluded that rational self-interest coincides with the interest of others.
English philosopher John Locke explained his theory of empiricism, a philosophical doctrine holding that all knowledge is based on experience, in An Essay Concerning Human Understanding (1690). Locke believed the human mind to be a blank slate at birth that gathered all its information from its surroundings - starting with simple ideas and combining these simple ideas into more complex ones. His theory greatly influenced education in Great Britain and the United States. Locke believed that education should begin in early childhood and should proceed gradually as the child learns increasingly complex ideas.
English philosopher John Locke responded to the challenge of Cartesian dualism by supporting a commonsense view that the corporeal (bodily or material) and the spiritual are simply two parts of nature that remain always present in human experience. He made no attempt rigorously to define these parts of nature or to construct a detailed system of metaphysics that attempted to explain them; Locke believed that such philosophical aims were impossible to carry out and thus pointless. Against the rationalism of Descartes and Spinoza, who believed in the ability to achieve knowledge through reasoning and logical deduction, Locke continued the empiricist tradition begun by Bacon and embraced by Hobbes. The empiricists believed that knowledge came from observation and sense perceptions rather than from reason alone.
In 1690 Locke gave empiricism a systematic framework with the publication of his Essay Concerning Human Understanding. Of particular importance was Locke's redirection of philosophy away from the study of the physical world and toward the study of the human mind. In so doing he made epistemology, the study of the nature of knowledge, the principal concern of philosophy in the 17th and 18th centuries. In his own theory of the mind Locke attempted to reduce all ideas to simple elements of experience, distinguishing sensation and reflection as the two sources of experience: sensation providing the material for knowledge of the external world, and reflection the material for knowledge of the mind.
Locke greatly influenced the skepticism of later British thinkers, such as George Berkeley and David Hume, by recognizing the vagueness of the concepts of metaphysics and by pointing out that inferences about the world outside the mind cannot be proved with certainty. His ethical and political writings had an equally great influence on subsequent thought. During the late 18th century the founders of the modern school of utilitarianism, which makes happiness for the largest possible number of people the standard of right and wrong, drew heavily on the writings of Locke. His defence of constitutional government, religious tolerance, and natural human rights influenced the development of liberal thought during the late 18th century in France and the United States as well as in Great Britain.
Efforts to resolve the dualism of mind and matter, a problem first raised by Descartes, continued to engage philosophers during the 17th and 18th centuries. The division between science and religious belief also occupied them. There, the aim was to preserve the essentials of faith in God while at the same time defending the right to think freely. One view called Deism saw God as the cause of the great mechanism of the world, a view more in harmony with science than with traditional religion. Natural science at this time was striding ahead, relying on sense perception as well as reason, and thereby discovering the universal laws of nature and physics. Such empirical (observation-based) knowledge appeared to be more certain and valuable than philosophical knowledge based upon reason alone.
After Locke, philosophers became more sceptical about achieving knowledge that they could be certain was true. Some thinkers who despaired of finding a resolution to dualism embraced skepticism, the doctrine that true knowledge, other than what we experience through the senses, is impossible. Others turned to increasingly radical theories of being and knowledge. Among them was German philosopher Immanuel Kant, probably the most influential of all because he set Western philosophy on a new path that it still follows today. Kant's view that knowledge of the world depends upon certain innate categories or ideas in the human mind is known as idealism.
Conventionalism, meanwhile, names any theory that magnifies the role of decisions, or free selection from among equally possible alternatives, in order to show that what appears to be objective or fixed by nature is in fact an artefact of human convention, similar to conventions of etiquette, or grammar, or law. Thus one might suppose that moral rules owe more to social convention than to anything imposed from outside, or that supposedly inexorable necessities are in fact the shadow of our linguistic conventions. The disadvantage of conventionalism is that it must show that alternative, equally workable conventions could have been adopted; for example, if we hold that some ethical norm such as respect for promises or property is conventional, we ought to be able to show that human needs would have been equally well satisfied by a system involving a different norm, and this may be hard to establish.
A related convention, suggested by Paul Grice (1913-88), directs participants in conversation to pay heed to an accepted purpose or direction of the exchange. Contributions made without paying this attention are liable to be rejected for reasons other than straightforward falsity: something true but unhelpful or inappropriate may meet with puzzlement or rejection. We can thus never infer, from the fact that it would be inappropriate to say something in some circumstance, that what would be said, were we to say it, would be false. This inference was frequently made in ordinary language philosophy, it being argued, for example, that since we do not normally say 'there seems to be a barn there' when there is unmistakably a barn there, it is false that on such occasions there seems to be a barn there.
There are two main views on the nature of theories. According to the 'received view', theories are partially interpreted axiomatic systems; according to the semantic view, a theory is a collection of models (Suppe, 1974). A natural language, however, comes ready interpreted, and the semantic problem is not that of specification but of understanding the relationship between terms of various categories (names, descriptions, predicates, adverbs . . .) and their meanings. An influential proposal is that this relationship is best understood by attempting to provide a 'truth definition' for the language, which will involve giving an account of the effect terms and structures of different kinds have on the truth-conditions of sentences containing them.
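The shape of such a truth definition can be illustrated with the familiar Tarskian biconditional: 'Snow is white' is true if and only if snow is white. A truth definition for a language derives one such biconditional for each of its sentences from axioms assigning semantic properties to the sentence's parts, thereby displaying how the truth-condition of the whole depends on the contributions of names, predicates and the rest.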
The axiomatic method begins from axioms: an axiom is a proposition laid down as one from which we may begin, an assertion that we have taken as fundamental, at least for the branch of enquiry in hand. The axiomatic method is that of defining a set of such propositions, together with the 'proof procedures' or rules of inference by which theorems are derived from them. But how does a proof ever get started? Suppose I have as premises (1) p and (2) p ➞ q. Can I infer q? Only, it seems, if I am sure of (3) (p & (p ➞ q)) ➞ q. Can I then infer q? Only, it seems, if I am sure of (4) ((p & (p ➞ q)) & ((p & (p ➞ q)) ➞ q)) ➞ q. For each new axiom (N) I need a further axiom (N + 1) telling me that the set so far implies q, and the regress never stops. The usual solution is to treat a system as containing not only axioms, but also rules of inference, allowing movement from the axioms. The rule 'modus ponens' allows us to pass from the first two premises to q. This puzzle, due to Charles Lutwidge Dodgson (1832-98), better known as Lewis Carroll, shows that it is essential to distinguish the two theoretical categories, although there may be choice about which propositions to put in which category.
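Treating modus ponens as a rule rather than as a further premise is what makes the derivation finite. A minimal illustration in the notation above:

(1) p [premise]
(2) p ➞ q [premise]
(3) q [from (1) and (2) by modus ponens]

No additional premise licensing the final step is needed; the licence is supplied by the rule, which belongs to a different theoretical category from the propositions it operates on.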
An axiomatic theory usually emerges from a body of (supposed) truths that are not neatly organized, making the theory difficult to survey or study as a whole. The axiomatic method is an idea for organizing such a theory (Hilbert, 1970): one tries to select from among the supposed truths a small number from which all the others can be seen to be deductively inferable. This makes the theory rather more tractable since, in a sense, all the truths are contained in those few. In a theory so organized, the few truths from which all others are deductively inferred are called axioms. Just as algebraic and differential equations, which were used to study mathematical and physical processes, could themselves be made objects of mathematical study, so axiomatic theories, which are means of representing physical processes and mathematical structures, could themselves be made objects of mathematical investigation.
Traditionally (as in Leibniz, 1704), many philosophers had the conviction that all truths, or all truths about a particular domain, followed from a few principles. These principles were taken to be either metaphysically prior or epistemologically prior, or both. In the first sense, they were taken to be entities of such a nature that what exists is 'caused' by them. When the principles were taken as epistemologically prior, that is, as axioms, they were taken to be epistemologically privileged, e.g., self-evident, not needing to be demonstrated, or (again, inclusive 'or') to be such that all truths do follow from them (by deductive inferences). Gödel (1984) showed, by treating axiomatic theories as themselves mathematical objects, that mathematics, and even a small part of mathematics, elementary number theory, could not be completely axiomatized: more precisely, any class of axioms such that we could effectively decide, of any proposition, whether or not it was in the class, would be too small to capture all of the truths.
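In a standard modern formulation (not the one used above): for any consistent, effectively axiomatized theory T that includes elementary arithmetic, there is a sentence G such that neither T ⊢ G nor T ⊢ ¬G. Since one of G and ¬G is true, the truths of arithmetic outrun the theorems of any such T.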
The use of a model to test for the consistency of an axiomatized system is older than modern logic. Descartes's algebraic interpretation of Euclidean geometry provides a way of showing that if the theory of real numbers is consistent, so is the geometry. Similar mappings were used by mathematicians in the 19th century, for example, to show that if Euclidean geometry is consistent, so are various non-Euclidean geometries. Model theory is the general study of this kind of procedure: the study of interpretations of formal systems. Proof theory, by contrast, studies relations of deducibility between formulae of a system, as defined purely syntactically, that is, without reference to the intended interpretation of the calculus. But once the notion of an interpretation is in place we can ask whether a formal system meets certain conditions. In particular, can it lead us from sentences that are true under some interpretation to ones that are false under the same interpretation? And if a sentence is true under all interpretations, is it also a theorem of the system? We can define a notion of validity (a formula is valid if it is true in all interpretations) and a notion of semantic consequence, written

{A1 . . . An} ⊨ B

meaning that B is true in all interpretations in which A1 . . . An are true. The central questions for a calculus will be whether all and only its theorems are valid, and whether {A1 . . . An} ⊨ B if and only if {A1 . . . An} ⊢ B. These are the questions of the soundness and completeness of a formal system. For the propositional calculus this becomes the question of whether the proof theory delivers as theorems all and only the tautologies. There are many axiomatizations of the propositional calculus that are consistent and complete. Gödel proved in 1929 that the first-order predicate calculus is complete: any formula that is true under every interpretation is a theorem of the calculus.
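For the propositional case, the semantic notion of validity is mechanically checkable: a formula is valid (a tautology) just in case it is true under every assignment of truth-values to its atoms. The following sketch in Python (the function names are mine, purely illustrative) decides this by brute force:

from itertools import product

def is_tautology(formula, atoms):
    # 'formula' maps an interpretation (a dict from atom names to
    # truth-values) to the truth-value of the whole formula.
    for values in product([True, False], repeat=len(atoms)):
        interpretation = dict(zip(atoms, values))
        if not formula(interpretation):
            return False  # found a falsifying interpretation
    return True

# (p & (p -> q)) -> q is valid: true under all four interpretations.
mp = lambda i: not (i["p"] and (not i["p"] or i["q"])) or i["q"]
print(is_tautology(mp, ["p", "q"]))    # True

# p -> q alone is not valid: false when p is true and q is false.
cond = lambda i: (not i["p"]) or i["q"]
print(is_tautology(cond, ["p", "q"]))  # False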
The propositional calculus is the logical calculus whose expressions are letters representing sentences or propositions, and constants representing operations on those propositions to produce others of higher complexity. The operations include conjunction, disjunction, material implication and negation (although these need not be primitive). Propositional logic was partially anticipated by the Stoics but reached maturity only with the work of Frege, Russell, and Wittgenstein.
Frege introduced the concept of a function taking a number of names as arguments and delivering a proposition as its value. The idea is that 'x loves y' is a propositional function, which yields the proposition 'John loves Mary' from those two arguments (in that order). A propositional function is therefore roughly equivalent to a property or relation. In Principia Mathematica, Russell and Whitehead take propositional functions to be the fundamental kind of function, since the theory of descriptions could be taken as showing that other expressions denoting functions are incomplete symbols.
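On this conception a propositional function behaves much like a function in the programmer's sense. A toy rendering in Python (the names are mine, for illustration only):

# A propositional function: from arguments to a proposition.
def loves(x: str, y: str) -> str:
    return f"{x} loves {y}"

print(loves("John", "Mary"))   # John loves Mary

# Fixing one argument yields a one-place propositional function,
# roughly the property 'loves Mary':
loves_mary = lambda x: loves(x, "Mary")
print(loves_mary("John"))      # John loves Mary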
True and false are the two classical truth-values that a statement, proposition, or sentence can take. It is supposed in classical (two-valued) logic that each statement has one of these values, and none has both. A statement is then false if and only if it is not true. The basis of this scheme is that to each statement there corresponds a determinate truth condition, or way the world must be for it to be true; if this condition obtains the statement is true, and otherwise false. Statements may be felicitous or infelicitous in other dimensions (polite, misleading, apposite, witty, etc.), but truth is the central norm governing assertion. Considerations of vagueness may introduce greys into this black-and-white scheme, and whether falsity is the only way of failing to be true is raised by the phenomenon of presupposition, discussed next.
A presupposition, informally, is any suppressed premise or background framework of thought necessary to make an argument valid, or a position tenable. More formally, a presupposition has been defined as a proposition whose truth is necessary for either the truth or the falsity of another statement. Thus, if 'p' presupposes 'q', 'q' must be true for 'p' to be either true or false. In the theory of knowledge of Robin George Collingwood (1889-1943), any propositions capable of truth or falsity stand on a bed of 'absolute presuppositions' which are not properly capable of truth or falsity, since a system of thought will contain no way of approaching such a question. It was suggested by Peter Strawson (1919-), in opposition to Russell's theory of 'definite' descriptions, that 'there exists a King of France' is a presupposition of 'the King of France is bald', the latter being neither true nor false if there is no King of France. It is, however, a little unclear whether the idea is that no statement at all is made in such a case, or whether a statement is made but fails to be either true or false. The former option preserves classical logic, since we can still say that every statement is either true or false, but the latter does not, since in classical logic the law of 'bivalence' holds: every statement is either true or false. The introduction of presupposition therefore means either that a third truth-value is found, 'intermediate' between truth and falsity, or that classical logic is preserved but it is impossible to tell whether a particular sentence expresses a proposition that is a candidate for truth and falsity without knowing more than the formation rules of the language. Each suggestion carries costs, and there is some consensus that, at least where definite descriptions are involved, examples like the one given are equally well handled by regarding the overall sentence as false when the existence claim fails.
If a proposition is true it is said to take the truth-value true, and if false the truth-value false. The idea behind the term is the analogy between assigning a propositional variable one or other of these values, as in a formula of the propositional calculus, and assigning an object as the value of some other variable. (Logics with intermediate values are called many-valued logics.) A truth-function of a number of propositions or sentences is a function of them that has a definite truth-value, dependent only on the truth-values of the constituents. Thus (p & q) is a combination whose truth-value is true when 'p' is true and 'q' is true, and false otherwise; ¬p is a truth-function of 'p', false when 'p' is true and true when 'p' is false. The way in which the value of the whole is determined by the combinations of values of the constituents is presented in a truth table.
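The truth tables for the two examples just given:

p  q  (p & q)
T  T     T
T  F     F
F  T     F
F  F     F

p  ¬p
T   F
F   T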
Truths of fact, unlike truths of reason, cannot be reduced to any identity, and our only way of knowing them is a posteriori, by reference to the facts of the empirical world.
February 10, 2010